Review
Abstract
Background: Pain, a leading reason people seek medical care, has become a significant societal issue. Automated pain assessment has seen notable advancements over recent decades, addressing a critical need in both clinical and everyday settings.
Objective: The objective of this survey was to provide a comprehensive overview of pain and its mechanisms, to explore existing research on automated pain recognition modalities, and to identify key challenges and future directions in this field.
Methods: A literature review was conducted, analyzing studies focused on various modalities for automated pain recognition. The modalities reviewed include facial expressions, physiological signals, audio cues, and pupil dilation, with a focus on their efficacy and application in pain assessment.
Results: The survey found that each modality offers unique contributions to automated pain recognition, with facial expressions and physiological signals showing particular promise. However, the reliability and accuracy of these modalities vary, often depending on factors such as individual variability and environmental conditions.
Conclusions: While automated pain recognition has progressed considerably, challenges remain in achieving consistent accuracy across diverse populations and contexts. Future research directions are suggested to address these challenges, enhancing the reliability and applicability of automated pain assessment in clinical practice.
doi:10.2196/53026
Introduction
Pain is “an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage,” according to the International Association for the Study of Pain [
]. However, the discussion on the most precise definition of pain is ongoing, and advances in the understanding of pain have motivated a biopsychosocial perspective that captures the evidence-based understanding and evolution of pain [ ]. On the basis of its origin, pain is categorized as nociceptive (due to stimulation of sensory nerve fibers), neuropathic (due to an impaired somatosensory nervous system), or psychogenic (caused, increased, or prolonged by mental, emotional, or behavioral factors). On the basis of its duration, pain may be categorized as acute (short duration) or chronic (long duration, lasting >3 months). Approximately 20% of adults in the United States have chronic pain, and chronic pain is the most common reason adults seek medical care. For society, chronic pain contributes an estimated US $560 billion each year in medical expenses, lost productivity, and disability caused by conditions such as low back pain, arthritis, and joint pain [
, ]. These negative impacts make chronic pain a persistent public health concern. Inappropriate pain management can have deleterious physical, psychological, social, and financial consequences for patients. Untreated pain can lead to chronic pain syndrome, which is often accompanied by decreased mobility, impaired immunity, reduced concentration, anorexia, and sleep disturbances. More importantly, the use of prescription opioids to treat chronic noncancer pain carries a substantial risk of abuse, dependence, and overdose [ ]. As the first step of pain management, pain assessment plays an essential role [
]. Imprecise pain assessment can have severe consequences. Undertreatment of pain causes not only psychological but also physiological consequences, such as increased blood pressure and heart rate. By contrast, overtreatment of pain may result in nausea, vomiting, or constipation in the short term and drug addiction in the long term. Traditionally, pain assessment is conducted through self-reports or observational scales. Self-report refers to the conscious communication of pain-related information by the person in pain, typically using spoken or written language or gestures. Various pain rating scales have been developed to capture patients’ self-reports of pain intensity; traditional approaches include the Verbal Rating Scale [ ], the Visual Analog Scale [ ], the Numerical Rating Scale [ ], and the Wong-Baker FACES Scale [ ]. However, such scoring methods are not feasible for certain patients, such as those who are unconscious. For these patients, observational pain scales, such as the Behavioral Pain Scale [
], Pain Assessment in Advanced Dementia [ ], or the Neonatal Infant Pain Scale [ ], are used in clinical settings. Most scales consider facial expressions, vocalizations, and body language, while some include vital parameters. Assessing and comparing the validity of these scales is difficult because studies differ considerably in design, methodology, participants, and conceptualization of the pain phenomenon. Pain assessment through observation is very challenging and is affected by the observer’s subjective biases and erroneous beliefs [ ]. To address these challenges, it is necessary to develop an objective, accurate, and continuous pain assessment method, as shown in
. In recent decades, multiple studies have evaluated the feasibility of automated pain assessment using multiple modalities and machine learning (ML) techniques. This paper surveys recent advances in the field in terms of datasets, modalities, and ML models. Finally, we present the remaining challenges in the field and propose future directions.
Pain Mechanism
The pain mechanism is not completely understood because of its complexity and diversity [
]. Pain, created by the brain, is a psychological state rather than a physical one [ ]. Unlike pain, nociception refers to the response of the peripheral and central nervous systems to internal or external stimuli, triggered by the activation of nociceptors [ ]. A noxious stimulus damages, or threatens to damage, tissue and activates nociceptors in peripheral structures. The information is then transmitted to the spinal cord dorsal horn or the nucleus caudalis, and from there it continues to the cerebral cortex via the brainstem, where the perception of pain is generated. Thus, no brain, no pain [ ]. presents the mechanism of pain.
Usually, pain is regarded as chronic or acute according to its duration. Acute pain is a type of sudden pain. The mechanism of momentary pain is well understood [
]. Nociceptors generate the nociceptive signal, and the information is transmitted to the brain, where the perception of pain arises. There are 2 major types of nociceptors responding to different stimuli: C-fibers, with unmyelinated axons, and A-delta fibers, with thinly myelinated axons [ ]. C-fibers generate slow, diffuse pain, while A-delta fibers are related to sharp, pricking pain. Silent nociceptors typically respond to endogenous chemical mediators related to tissue injury [ ]. Chronic pain, lasting >3 months, has no useful biological function and is challenging to treat because of its varied etiologies [
- ]. According to the International Classification of Diseases, Eleventh Revision, chronic pain can be categorized into musculoskeletal, neuropathic, visceral, and cancer pain [ ]. Psychological distress refers to a diffuse subjective experience as an internal response to noxious stimuli. Many patients report that psychological pain is more severe than intense physical pain [
]. Chronic pain can lead to psychological pain and depression, while depression can exacerbate chronic pain [ , ]. Psychogenic pain is physical pain caused or increased by mental and emotional factors [ ]. Treatments such as transcutaneous electrical nerve stimulation or psychotherapy are often more effective than traditional painkillers for reducing psychogenic pain [ , ]. The body responds to pain via multiple physiological processes involving the sympathetic nervous system (SNS), the neuroendocrine system, the immune system, and emotions [
]. The SNS, known for the fight-or-flight response, increases heart rate and blood pressure when activated via hormones such as the catecholamines epinephrine and norepinephrine [ ]. The SNS also activates sweat glands via acetylcholine, so the volume of sweat secreted within a time window reflects the level of SNS activation [ ].
Pain Datasets
Representative data are crucial for building a pain recognition system and demonstrating its efficacy. Crucially, the system should perform optimally within the intended medical context, which must be validated through clinical studies involving patients. In the early stages of development, experimental pain research with healthy volunteers can be useful: it allows strictly controlled conditions, larger participant pools, and repeated application of pain stimuli. Such data are foundational to the development of ML models for automated pain detection.
For studying pain in healthy adults, an external stimulus is needed. Common methods include heat applied via contact (eg, heated objects and electrical heaters) or radiant sources (eg, infrared light).
summarizes the publicly available datasets that have been used for pain recognition research. The UNBC-McMaster Shoulder Pain Expression Archive Database [ ] includes 200 video sequences capturing the facial expressions of 25 participants experiencing shoulder pain. Each sequence shows individuals performing a series of active and passive range-of-motion tests designed to provoke visible pain responses, providing a dataset rich in both the variety and volume of pain expressions. The dataset includes self-reported and observer assessments of pain intensity at the video level, along with Facial Action Coding System (FACS) coding at the frame level. The BioVid Heat Pain Database [ ] is a collection of physiological data and videos from 90 healthy adults subjected to controlled heat stimuli. BioVid consists of several parts: A, B, and C focus on pain stimulation, while D and E are dedicated to posed expressions and emotion elicitation, respectively. The MIntPAIN database [ ] collected color, depth, and thermal videos from 20 healthy adults subjected to approximately 1600 instances of electrical pain stimuli at 4 intensity levels. The EmoPain [ ], SenseEmotion [ ], X-ITE Pain [ ], BP4D-Spontaneous [ ], and BP4D+ [ ] datasets are substantial resources for pain and emotion studies. EmoPain contains video, audio, motion, and surface electromyogram (sEMG) data for lower back pain. SenseEmotion and X-ITE Pain include audio and physiological data from healthy adults subjected to experimental pain stimuli, while X-ITE additionally provides thermal videos, body movement data, and electromyography measurements. BP4D-Spontaneous and BP4D+ offer facial video recordings of individuals undergoing the cold pressor task, with BP4D+ further providing 3D and thermal videos along with physiological signals.
Database | Participants | Modalities | Annotation
Database with adults
UNBC-McMaster [ ] | 25 adults with shoulder pain | Video of the face (RGBa) | FACSb, VASc, and OPId
BioVid [ ] | 87 healthy adults | Video of face (RGB), EDAe, electrocardiogram, and electromyography | Stimulus (calibrated per person)
MIntPAIN [ ] | 20 healthy adults | Video of face (RGB, depth, and thermal) | Stimulus (calibrated per person)
EmoPain [ ] | 22 adults with chronic back pain | Video, audio, electromyography, and motion capture | Self-report and naive OPI
SenseEmotion [ ] | 45 healthy adults | Video of face, audio, EDA, electrocardiogram, and electromyography | Stimulus (calibrated per person)
X-ITE [ ] | 134 healthy adults | Video of face, video of body, audio, EDA, electrocardiogram, and electromyography | Stimulus (calibrated per person)
BP4D-Spontaneous [ ] | 41 healthy adults | Video of face (RGB and 3D) | Stimulus and FACS
BP4D+ [ ] | 140 healthy adults | Video of face (RGB, 3D, and thermal), heart rate, respiration rate, blood pressure, and EDA | Stimulus and FACS
Database with neonates
iCOPE [ ] | 26 healthy neonates | 204 RGB photographs of face | Category (pain, rest, cry, air puff, and friction)
YouTube [ ] | 142 infants | Video and audio | FLACCf
APN-db [ ] | 112 healthy neonates | Video of face (RGB) | NFLAPSg, NIPSh, and NFCSi
NPAD-ID [ ] | 36 healthy neonates and 9 neonates who underwent surgery | Video of face and body (RGB) | NIPS and N-PASS
iCOPEvid [ ] | 49 neonates | Video of face (grayscale) | Category (pain and no pain)
USF-MNPAD-I [ ] | 36 neonates | Video of face (RGB), audio, heart rate, blood pressure, SpO2j, deoxyhemoglobin (HbH), and oxyhemoglobin (HbO2) | NIPS and N-PASSk
aRGB: Red, green, blue color model.
bFACS: Facial Action Coding System.
cVAS: Visual Analog Scale.
dOPI: Observed Pain Intensity.
eEDA: electrodermal activity.
fFLACC: Face, Legs, Activity, Cry, Consolability Scale.
gNFLAPS: Neonatal Face and Limb Acute Pain Scale.
hNIPS: Neonatal Infant Pain Scale.
iNFCS: Neonatal Facial Coding System.
jSpO2: saturation of peripheral oxygen.
kN-PASS: Neonatal Pain, Agitation and Sedation Scale.
In the field of infant pain research, the iCOPE [
], YouTube [ ], APN-db [ ], iCOPEvid [ ], and USF-MNPAD-I [ ] databases are publicly available. iCOPE consists of 204 static photographs capturing 26 neonates during various procedures, providing valuable insights into the facial expressions associated with infant pain. The YouTube dataset offers 142 videos with audio showing the reactions of different infants undergoing immunization. APN-db includes >200 videos of infants undergoing various procedures and features unique annotations, such as Neonatal Face and Limb Acute Pain intensity. The USF-MNPAD-I dataset collects video, audio, and physiological data from 58 neonates during their stay in the neonatal intensive care unit and is annotated using the Neonatal Infant Pain Scale and N-PASS scales.
Postoperative Pain
Although automated pain assessment in controlled settings is well studied, postoperative pain has received less attention because of the difficulty of data collection. Postoperative pain results from tissue injury following surgery and is critical to manage, as inadequate treatment can lead to serious physiological and psychological outcomes. Postoperative pain datasets often exhibit imbalanced distributions and may contain missing labels owing to variability in patient experiences and clinical settings, further complicating accurate and comprehensive pain assessment. The NPAD-ID database [
] captures video, audio, and physiological data from 40 infants undergoing procedural (heel lancing and immunization) and postoperative (gastrostomy tube) pain. Notably, it includes postoperative pain data, addressing the complexity and variability of pain levels in real-world clinical settings, thereby enhancing the ecological validity of the assessment. Salekin et al [ ] present a novel fully automated deep learning framework to assess neonatal postoperative pain. It uses a bilinear convolutional neural network (B-CNN) to extract facial features and a recurrent neural network (RNN) to model the temporal patterns of postoperative pain. The study uses a dataset of >600 minutes of visual, vocal, and physiological data from neonates, demonstrating the feasibility and efficiency of combining B-CNN and RNN for continuous and accurate assessment of postoperative pain intensity in clinical settings. Salekin et al [ ] introduce an automated system for assessing neonatal postoperative pain by integrating visual, vocal, and physiological data. The study also uses a B-CNN for spatial feature extraction but uses a long short-term memory (LSTM) network for capturing temporal patterns, demonstrating that the multimodal spatial-temporal approach significantly outperforms unimodal methods, achieving an area under the curve (AUC) of 0.87 and accuracy of 79%. Automated postoperative pain assessment is still in its nascent stages, primarily hindered by a lack of comprehensive datasets and consistent research efforts. The current methods, often unimodal and focused on short-term procedural pain, fail to capture the complex and prolonged nature of postoperative pain. There is a pressing need for more extensive and diverse datasets to improve the accuracy and reliability of these systems. 
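The studies above report performance as the area under the receiver operating characteristic curve (AUC). As a reminder of what that metric measures, the sketch below computes AUC as the probability that a randomly chosen pain sample is scored above a randomly chosen no-pain sample (the Mann-Whitney formulation); the labels and scores are hypothetical, not taken from the cited studies.

```python
# Sketch: AUC via the Mann-Whitney statistic; ties between a positive and a
# negative score count as half a win. Labels/scores below are invented.

def roc_auc(labels, scores):
    """AUC = probability that a random positive is scored above a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one sample of each class")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical pain probabilities vs ground-truth pain labels
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
print(round(roc_auc(y_true, y_score), 3))
```

An AUC of 0.5 corresponds to chance-level ranking, and 1.0 to perfect separation of pain from no-pain samples, which puts the reported 0.87 in context.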
Despite these challenges, the potential benefits of automated pain assessment are immense, offering more consistent and objective pain management that can significantly enhance patient outcomes and reduce the burden on health care providers.
Automatic Pain Assessment
Overview
Automated tools for pain assessment hold great promise. Because pain produces distinct physiological and behavioral responses, signals that capture these responses can be used to detect its presence. However, prior research has been limited, and automated approaches have not yet become widely used in clinical practice. In this section, we briefly outline the approaches relevant to the development of automated pain assessment methods described in the research literature. Specifically, we review their system architecture (inputs and outputs), describe the data sources available for the research and development of ML-based automated pain assessment tools, and give an overview of system validation challenges. This section summarizes the results of our survey of automatic pain detection approaches.
The Use of Modalities
The selection of sensors is a critical aspect of automated pain assessment, as different sensors can convey varying levels of information and have different discriminative abilities. Modalities commonly used in this field can be broadly classified into 3 categories: video, audio, and physiological signals, as shown in
. Moreover, also includes information on the modalities used in studies (including brain activity, cardiovascular activity, electrodermal activity [EDA], respiration activity, and pupil size). In terms of physiological signals, brain activity can be measured using electroencephalography, functional magnetic resonance imaging (fMRI), or functional near-infrared spectroscopy (fNIRS). Cardiovascular activity can be measured using an electrocardiogram or photoplethysmography, while EDA is measured as skin conductance level and muscle activity with a surface electromyogram (sEMG). To gain insight into the prevalence of each modality, we conducted a search for “Modality AND Pain AND Machine learning” (eg, “EEG AND Pain AND Machine learning”) on PubMed and Scopus, limiting the search to the period from January 1, 2010, to August 1, 2023. We then recorded the number of results and normalized them to the range of 0 to 100 for each database. The prevalence score was calculated as the average of the normalized results from PubMed and Scopus.
Category and name | Description | Prevalencea | References
Video
Video analysis | Analyzes facial expressions and body movements to assess pain levels [ ]. | 100 | [ , ]
Audio
Audio analysis | Analyzes vocal characteristics and speech patterns to assess pain [ ]. | 48.2 | [ ]
Pupil size
Pupil size measurement | Measures changes in pupil diameter as an indicator of pain [ ]. | 12.7 | [ , ]
Brain activity
Electroencephalogram | A test that detects tiny electrical charges resulting from the activity of brain cells [ ]. | 69.6 | [ - ]
Functional magnetic resonance imaging | Uses magnetic resonance imaging to measure the changes in hemodynamics caused by neuronal activity [ ]. | 95.9 | [ - ]
Functional near-infrared spectroscopy | Uses the scattering of near-infrared light (600-900 nm) by the main components of blood to measure changes in oxyhemoglobin and deoxyhemoglobin during brain activity [ ]. | 7.9 | [ , ]
Cardiovascular activity
Electrocardiogram | A test that measures the electrical activity of the heartbeat [ ]. | 39.1 | [ - ]
Photoplethysmograph | An optical technique that detects blood volume changes in the microvascular bed of tissue [ ]. | 9.4 | [ , ]
Electrodermal activity
Skin conductance level | The measurement of the electrical conductivity of the skin [ ]. | 25.9 | [ , , ]
Surface electromyogram | A technique that measures muscle activity noninvasively using surface electrodes placed on the skin overlying the muscle [ ]. | 25.6 | [ , , ]
Respiration
Respiration | A person’s breathing and the movement of air into and out of the lungs [ ]. | 17.5 | [ , ]
aPrevalence is measured by the weighted search results from Scopus and PubMed, covering the period from 2010 to 2023, using the keywords “Name” AND “Pain” AND “Machine learning” as of August 1, 2023; the results are standardized on a scale of 0 to 100.
As shown in the table, video was the most prevalent modality in pain studies, with a prevalence score of 100. fMRI, electroencephalography, and electrocardiography were also frequently used, with prevalence scores of 95.9, 69.6, and 39.1, respectively. In contrast, fNIRS and photoplethysmography had much lower prevalence scores of <10.
Convenience and feasibility should also be considered when selecting sensors. Some sensors, such as electroencephalograms and fMRI, are nonwearable and can be invasive, which may limit their utility in certain settings. Moreover, complex signals require more sophisticated processing techniques and computing resources, which may not be practical in some situations, such as those involving microprocessors.
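The prevalence-score computation described above (normalizing search-hit counts to 0-100 within each database, then averaging across PubMed and Scopus) can be sketched as follows; the hit counts here are invented for illustration and are not the survey's actual query results.

```python
# Sketch of the prevalence score: raw hit counts per modality are scaled so
# the most-cited modality in each database maps to 100, then the two
# per-database scores are averaged. Counts below are hypothetical.

def normalize(counts):
    """Scale a dict of raw counts to the range 0-100 (max count -> 100)."""
    peak = max(counts.values())
    return {name: 100.0 * n / peak for name, n in counts.items()}

pubmed = {"video": 420, "fMRI": 400, "EEG": 290, "fNIRS": 30}   # hypothetical
scopus = {"video": 510, "fMRI": 495, "EEG": 360, "fNIRS": 45}   # hypothetical

pm, sc = normalize(pubmed), normalize(scopus)
prevalence = {name: round((pm[name] + sc[name]) / 2, 1) for name in pubmed}
print(prevalence["video"])  # the most prevalent modality scores 100.0
```

Because each database is normalized independently, a modality that tops both databases scores exactly 100 regardless of the absolute hit counts.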
Facial Expression
Overview
Facial expression during the experience of pain is not nonspecific grimacing but conveys pain-specific information. Studies investigating facial expressions of pain have most often used FACS [
], the gold standard for facial expression research. FACS is a fine-grained, objective, and anatomically based coding system that differentiates between 44 facial movements known as action units (AUs). Coders are trained to apply specific operational criteria to determine the onset, offset, and intensity of AUs. Using FACS, it was shown that facial expressions of pain are composed of a small subset of facial activities, namely, lowering the brows (AU4), cheek raise or lid tightening (AU6 and AU7), nose wrinkling or raising the upper lip (AU9 and AU10), and eye closure for >0.5 seconds (AU43). On the basis of this observation, Prkachin and Solomon [ ] developed the Prkachin and Solomon Pain Intensity (PSPI) metric, a 16-level scale based on the contributions of the individual intensities of the pain-related AUs, defined as follows:

PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43
shows samples of different PSPI levels from the UNBC-McMaster pain dataset. The list of pain-related AUs has been further expanded in more extensive research [ ] to include the lip corner puller (AU12), lip stretch (AU20), lips part (AU25), jaw drop (AU26), and mouth stretch (AU27).
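The PSPI computation is simple enough to sketch directly: it sums the AU4 intensity, the larger of AU6/AU7, the larger of AU9/AU10, and binary eye closure (AU43), yielding a 0-16 scale. The AU intensities below are invented for illustration.

```python
# Sketch: frame-level PSPI from FACS action unit (AU) intensities.
# AU intensities are coded 0 (absent) to 5 (maximum); AU43 (eye closure)
# is coded as absent (0) or present (1).

def pspi(au4, au6, au7, au9, au10, au43):
    """PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43."""
    for v in (au4, au6, au7, au9, au10):
        if not 0 <= v <= 5:
            raise ValueError("AU intensities must be in the range 0-5")
    if au43 not in (0, 1):
        raise ValueError("AU43 must be 0 (eyes open) or 1 (eyes closed)")
    return au4 + max(au6, au7) + max(au9, au10) + au43

# A frame with moderate brow lowering, strong cheek raise, and closed eyes
print(pspi(au4=2, au6=4, au7=3, au9=0, au10=1, au43=1))  # 2 + 4 + 1 + 1 = 8
```

Taking the maximum within the AU6/AU7 and AU9/AU10 pairs means co-occurring, partially redundant movements (eg, cheek raise with lid tightening) are not double-counted.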

Facial activities during experimental and clinical pain are largely innate but not uniform across individuals: people display different parts or combinations of facial activities. Cluster analyses identified 4 distinct facial activity patterns: (1) narrowed eyes with raised upper lip or nose wrinkling and furrowed brows, (2) narrowed eyes with furrowed brows, (3) narrowed eyes with mouth opening, and (4) raised eyebrows, which are less frequent and stable, often indicating novelty or surprise in response to pain. Recognizing these patterns improves pain detection more than focusing on a single expression; acknowledging this variability in facial expressions can thus enhance pain communication.
Facial expression analysis uses spatial and spatiotemporal features. Spatial features capture static details of the face, such as the geometric and textural characteristics of the eyes, eyebrows, nose, lips, and facial contours, using techniques such as facial landmark detection, geometric feature extraction, Gabor filters, local binary patterns (LBPs), and histogram of oriented gradients (HOG). Spatiotemporal features capture dynamic changes in expressions over time using techniques such as optical flow or differences between consecutive frames. Advanced methods may involve 3D facial modeling or LSTM networks to identify temporal dependencies. Combining spatial and spatiotemporal features provides a comprehensive analysis of facial expressions.
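As a concrete example of one of the spatial texture features mentioned above, the following toy implementation computes a basic 8-neighbor local binary pattern (LBP) code for a single pixel; real systems compute such codes over whole facial regions and use histograms of them as features. The patch values are arbitrary.

```python
# Sketch: a minimal LBP descriptor. Each pixel is encoded by thresholding its
# 8 neighbors against its own intensity; neighbors >= center set a bit.

# 8-neighborhood offsets, clockwise from the top-left pixel
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def lbp_code(img, r, c):
    """8-bit LBP code (0-255) for the interior pixel at (r, c)."""
    center = img[r][c]
    code = 0
    for bit, (dr, dc) in enumerate(OFFSETS):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

# Hypothetical 3x3 grayscale patch (intensities 0-255)
patch = [
    [10, 20, 30],
    [40, 25, 60],
    [70, 80, 90],
]
print(lbp_code(patch, 1, 1))
```

Because the code depends only on whether each neighbor is brighter or darker than the center, LBP is robust to monotonic illumination changes, one reason it is popular for facial texture analysis.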
Vision-Based Spatial Features
In the research conducted by Ashraf et al [
] and Lucey et al [ ], features derived from the Active Appearance Model were input into support vector machine (SVM) classifiers for frame-level pain recognition. In addition, they implemented pain detection at the sequence level by averaging the frame-level predictions. Gholami et al [ ] used a Bayesian extension of the SVM, known as the relevance vector machine, to differentiate between instances of pain and no pain in neonates and to assess varying pain intensity levels. Meanwhile, Hammal et al [ ] identified 4 levels of pain intensity using log-normal filter-based features and an SVM classifier. Kaltwang et al [ ] conducted a comparative study of 3 separate methodologies, using facial landmarks, the discrete cosine transform, and LBP features to train 3 distinct relevance vector regression (RVR) models for estimating Prkachin and Solomon Pain Intensity. The best results were achieved by training an additional RVR model that consolidated the predictions of the 3 previously trained RVR models. The system in [ ] used a pyramid HOG for shape information and a pyramid LBP for appearance information, offering a more automated and objective approach to pain monitoring. Pedersen [
] used a 4-layer contractive autoencoder with an SVM, resulting in an effective frame-level pain detection system. Egede et al [ ] extracted features using both deep learning models and handcrafted methods: facial landmarks, HOG, and deep vectors drawn from pretrained VGG-16 [ ] and ResNet-50 [ ] models. Rudovic et al [ ] introduced a personalized federated deep learning technique for pain estimation from facial images. This approach uses a compact convolutional neural network (CNN) architecture across various clients without sharing their facial images; rather than fully sharing model parameters, it keeps the last layer localized. Hosseini et al [ ] used a ResNet-18 model pretrained on the large emotion recognition dataset FER+ [ ] and applied transfer learning to improve accuracy and performance. Huang et al [ ] proposed a pain-awareness multistream CNN for feature extraction, focusing on the regions most relevant to pain expression instead of entire face images. Semwal and Londhe [ ] proposed an Ensemble of Compact CNNs using 3 compact CNNs (variants of VGG, MobileNet, and GoogleNet) and integrating their predictions using the average ensemble rule. Kharghanian et al [ , ] developed a 4-layer convolutional deep belief network, trained as stacked convolutional restricted Boltzmann machines, to extract features. Semwal et al [ ] introduced a novel fusion method for pain severity assessment in unconstrained environments using decision-level fusion of 3 distinct features: data-driven red, green, blue color model (RGB) features, entropy-based texture features, and complementary features from both RGB and texture data. Using 3 CNNs (VGG-TL, ETNet, and DSCNN) with transfer learning, an entropy texture network, and a dual-stream CNN, together with various data augmentation techniques, the model avoids overfitting and improves performance. The system demonstrates a 94% F1-score on a self-generated dataset from an unconstrained hospital setting. Alghamdi and Alaghband [
] presented an automatic pain assessment system based on facial expressions that uses 2 concurrent subsystems to analyze both the full face and the upper half of the face through pretrained CNNs, such as VGG16, InceptionV3, ResNet50, or ResNeXt50. Dai et al [ ] developed a real-time pain detection system by mixing pain and emotion datasets for optimal real-time performance and conducting a cross-corpus test. The study experiments with both AU-based and non-AU-based methods, ultimately deploying the method on a robot for frozen shoulder therapy, emphasizing the need for balanced and ecologically valid pain datasets and the importance of real-world application and testing. Karamitsos et al [ ] used the Haar cascade frontal face detector (OpenCV) for face detection; detected faces then undergo gray scaling, histogram equalization, cropping, mean filtering, and normalization. Their CNN, built upon a modified VGG16 architecture, achieves 92.5% accuracy. Barua et al [ ] used a shutter blinds-based model inspired by spontaneous facial expressions and patch-based learning to achieve >95% accuracy in pain detection from facial images, leveraging transfer learning for efficient deep feature extraction. The model uniquely mines hidden facial signatures from horizontal dynamic-sized patches, or “shutter blinds.” Li et al [
] introduced a video-based infant monitoring system to analyze infant pain using 3 databases: Train-Data, Data-Clinic, and Data-YouTube. Using a Fast Region-Based Convolutional Neural Network with object tracking and a hidden Markov model, the system precisely detects infant expressions and states. With a large dataset from varied sources, including >16,000 images and real-world clinical videos, the approach offers enhanced accuracy and reliability in infant pain detection. Zamzmi et al [ ] introduced a neonatal CNN that uses a cascaded architecture with 3 convolutional branches. This design merges image-specific and general information for pain detection. The neonatal CNN demonstrated 91% accuracy and 0.93 AUC on the Neonatal Pain Assessment Dataset and 84.5% accuracy on the Infant Classification of Pain Expression dataset. Witherow et al [ ] developed Facial Expressions Fusing Betamix Selected Landmark Features (FACE-BE-SELF), a novel deep adaptive method for adult-child facial expression classification. It fuses facial landmark data with deep feature representations to achieve domain-invariant classification; using a unique mixture of beta distributions, facial features are selected based on their correlations with expression, domain, and identity. FACE-BE-SELF stands out by adapting the adult and child domains concurrently, providing a unified expression representation for both groups, and it surpasses standard approaches in aligning latent representations of expressions across age groups.
Vision-Based Spatiotemporal Features
Bargshady et al [
] present an ensemble deep learning model that combines a 3-stream hybrid neural network with CNNs to extract facial features and classify pain levels. VGG-Face, integrated with principal component analysis (PCA), is used for early feature extraction, while a 3-layer hybrid of a CNN and bidirectional LSTM performs late-fusion classification. This approach, tested on multiple pain databases, surpasses competing models with an accuracy of >89%. The Sparse Autoencoders for Facial Expressions-Based Pain Assessment system [ ] reconstructs the upper part of the face from input images and then feeds both the original and reconstructed images into 2 concurrent, coupled InceptionV3 networks using sparse autoencoders. This dual-input approach emphasizes the upper facial features, which are essential for pain detection. By eliminating conventional preprocessing steps such as face detection and adeptly handling varying head poses, the system offers enhanced performance and accuracy across multiple datasets, even in challenging profile views. Karamitsos et al [ ] modified the temporal convolutional network algorithm and processed facial features extracted from a fine-tuned VGG-Face and PCA combined with hue, saturation, and value color spaces. The temporal convolutional network-based approach showcases faster performance and higher efficiency, achieving an accuracy of 92.44% and an AUC of 85%. Bargshady et al [ ] propose an enhanced joint hybrid CNN-bidirectional LSTM model, leveraging a fine-tuned VGG-Face for feature extraction and applying PCA to focus on the most significant features, improving computational efficiency. These features are then classified by the hybrid network into 4 levels of pain intensity.
3D CNNs have gained attention in several studies. Tavakolian and Hadid [
, ] created a 3D CNN that captures dynamic facial representations from videos and highlighted the typical use of a fixed temporal kernel depth in prior research, which often fails to capture different time ranges. In the study by Huang et al [ ], a hybrid network combining 3D, 2D, and 1D CNNs was introduced to extract spatiotemporal, spatial, and geometric features from image sequences. Wang et al [ ] used the convolutional 3D network for pain expression recognition, which primarily uses a 3×3×3 convolutional layer. However, this method often fails to capture the full spectrum of facial expression variations. To address this, they combined 3 distinct features: 3D CNN, HOG, and geometric features using support vector regression for pain estimation. They integrated the convolutional 3D network for spatiotemporal facial feature extraction and used the HOG in 2D images for geometric information to discern pain levels in facial expressions. De et al [ ] present a deep learning architecture, the Decomposed Multiscale Spatiotemporal Network (DMSN). It uses 3 innovative blocks, DMSN-A, DMSN-B, and DMSN-C, to efficiently capture varied facial dynamics across conditions such as depression and pain. The DMSN-A block focuses on pain expressions, which might vary rapidly; it uses a sequence of 3×1×1 temporal convolutions, capturing short to long temporal ranges. The studies by Granger and Cardinal [ ] and Praveen et al [ ] implemented weakly supervised domain adaptation, focusing on a shift from general affective expressions to specific pain expressions. Their framework used an inflated 3D CNN [ ] with 3 convolutional layers and 3 inception modules, extracting both spatial and temporal data from videos.
Physiological Signals
Overview
While facial expressions are commonly used to identify pain, physiological signals are also a valuable modality for automatic pain detection. As detailed in the Pain Mechanism section, pain triggers changes in physiological signals, such as increased heart rate and skin conductivity, due to the activation of the SNS and peripheral nervous system [
]. Conversely, changes in physiological signals can indicate the presence of pain. However, extracting discriminative information from physiological signals is challenging. On the other hand, physiological signals are objective indicators of pain because they cannot be artificially controlled [ ], whereas exterior signals, such as facial expressions and gestures, may be unreliable, as individuals can deliberately disguise their behaviors. This makes physiological signals more reliable than exterior signals. In addition, physiological signals can be measured during daily life, while video and hand gestures can typically only be captured in laboratory settings. Thus, researchers have invested significant effort in exploring the feasibility of using physiological signals for pain assessment. Recent advances in sensor technology, signal processing, feature extraction, and ML algorithms are essential to the success of physiological signal–based automatic pain assessment.
This section provides a comprehensive review of the latest developments in pain detection approaches based on physiological signals. Four key components are examined: (1) the modalities used, (2) measurement devices, (3) feature extraction methods, and (4) ML models. The modalities refer to the types of physiological signals used for pain detection, including electroencephalogram, fMRI, electrocardiogram, and EDA. Measurement devices include both wearable and nonwearable devices, encompassing cardiac monitors, skin conductivity sensors, temperature sensors, accelerometers, and more. Feature extraction methods are techniques used to extract informative features from physiological signals, such as time-domain, frequency-domain, and time-frequency features. Finally, ML models, such as SVM, artificial neural networks, and random forest (RF), are used to classify pain based on the extracted features.
Electroencephalogram as a Pain Indicator
Electroencephalography is a noninvasive technique widely used in the automatic detection of pain. Electrodes placed on the scalp detect electrical activity, which is amplified to produce a graphical representation of brain activity over time. Electroencephalogram recordings typically show a series of waveforms or oscillations that are grouped into different frequency bands, such as delta, theta, alpha, beta, and gamma. These frequency bands have been associated with different mental states and cognitive functions. Various studies have shown the potential of electroencephalogram-based pain detection, and different approaches have been proposed to extract discriminative features from electroencephalogram signals for pain classification. For instance, Panavaranan et al [
] extracted the power spectral density of an electroencephalogram using fast Fourier transform and used SVM to classify thermal pain. Hadjileontiadis et al [ ] proposed a novel approach that analyzes wavelet higher-order spectral features of an electroencephalogram to predict tonic cold pain. Vijayakumar et al [ ] extracted time-frequency wavelet representations of independent components from electroencephalogram data and trained an RF model to classify pain levels, achieving an intrasubject accuracy of 93.26%.
The use of electroencephalogram techniques for pain detection has great potential to provide objective measures of pain, as these methods directly measure brain activity related to pain perception. However, these techniques also have limitations, including high cost, limited availability, and the need for specialized expertise for data analysis.
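As an illustrative sketch of this kind of pipeline (not the implementation of any surveyed study), band powers can be extracted from electroencephalogram epochs with Welch's method and fed to a polynomial kernel SVM. The sampling rate, band limits, and synthetic data here are assumptions for demonstration:

```python
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg_epoch, fs=FS):
    """Mean power spectral density per frequency band for one epoch."""
    freqs, psd = welch(eeg_epoch, fs=fs, nperseg=fs * 2)
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]

# Synthetic example: 40 two-second epochs, half "no pain" (0), half "pain" (1)
rng = np.random.default_rng(0)
epochs = rng.normal(size=(40, FS * 2))
labels = np.array([0] * 20 + [1] * 20)
X = np.array([band_powers(e) for e in epochs])

# Polynomial kernel SVM, as used in some electroencephalogram studies
clf = SVC(kernel="poly", degree=3).fit(X, labels)
print(clf.predict(X[:2]))
```

In practice, the feature matrix would come from preprocessed, artifact-rejected epochs rather than random noise, and evaluation would use held-out subjects.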
fMRI as a Pain Indicator
fMRI is a powerful neuroimaging tool that measures changes in blood flow within the brain as a proxy for neural activity. By measuring changes in the blood oxygen level–dependent signal, fMRI can indirectly map changes in neural activity in response to a specific stimulus, such as a painful stimulus.
The fMRI technique has been widely used in pain research, revealing a network of brain regions that are activated by painful stimuli. These regions include the primary and secondary somatosensory cortex, thalamus, insular cortex, and anterior cingulate cortex, among others. The activation of these regions is believed to be involved in the sensory and affective components of pain processing.
Thus, neuroimaging techniques allow us to visualize and quantify brain activity and, in turn, quantify pain; fMRI is therefore frequently used in automatic pain assessment research. Wager et al [
] used the least absolute shrinkage and selection operator ML regression algorithm to recognize induced heat pain by assessing fMRI activity patterns. Shen et al [ ] derived primary, dorsal, and ventral visual networks from blood oxygen level–dependent fMRI scans by using independent component analysis and used an SVM to distinguish between patients with chronic low back pain and healthy volunteers, achieving an accuracy of 79.3%. Tu et al [ ] proposed a novel sliced inverse regression–based fMRI decoding method to reduce the fMRI data dimension and showed superior performance compared with traditional regularization-based decoding analyses (principal component analysis and discriminant analysis, partial least squares discriminant analysis, and least absolute shrinkage and selection operator). Robinson et al [ ] acquired fMRI scans and applied ML algorithms to classify patients with fibromyalgia and healthy volunteers.
Electrocardiogram as a Pain Indicator
An electrocardiogram is a widely used technique to measure the electrical activity of the heart and its changes during each cardiac cycle. The electrocardiogram waveform consists of several characteristic waves and intervals that correspond to the different phases of the cardiac cycle, including the P wave, QRS complex, and T-wave. By analyzing the size, shape, and timing of these waves and intervals, a wide range of cardiac conditions, such as arrhythmias, heart attacks, and heart failure, can be diagnosed. The use of electrocardiograms in pain detection assumes that pain can cause a physiological stress response, leading to cardiovascular changes that are related to the pain stimuli. The autonomic nervous system responds to pain by increasing sympathetic tension and decreasing parasympathetic tension, leading to an increase in heart rate and blood pressure. By analyzing the electrocardiogram signal, features that reflect the autonomic nervous system status, such as heart rate variability (HRV), can be extracted and used to detect pain.
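To make the HRV idea concrete, a minimal sketch of extracting common time-domain HRV features from a series of RR intervals might look like the following (the RR series is synthetic; the feature definitions follow standard HRV conventions):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Time-domain HRV features from RR intervals given in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_rr": rr.mean(),                          # average beat-to-beat interval
        "sdnn": rr.std(ddof=1),                        # SD of NN intervals
        "rmssd": np.sqrt(np.mean(diffs ** 2)),         # root mean square of successive differences
        "pnn20": 100.0 * np.mean(np.abs(diffs) > 20),  # % of successive intervals differing by >20 ms
    }

# Synthetic RR series around 800 ms (75 beats/min)
rng = np.random.default_rng(1)
rr = 800 + rng.normal(0, 30, size=300)
features = hrv_time_domain(rr)
print(sorted(features))
```

In a real pipeline, the RR series would come from R-peak detection on the raw electrocardiogram, and frequency-domain features (LF, HF, LF/HF ratio) would be added via spectral analysis of the interpolated RR series.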
Several studies have shown the potential of electrocardiograms for pain detection. Walter et al [
] collected electrocardiogram data from 90 subjects using heat as the pain stimulus and created the BioVid dataset, which also includes skin conductance level, sEMG, and video data. Adjei et al [ ] performed spectral analysis on electrocardiogram data and extracted HRV features, such as the low-frequency (LF) and high-frequency (HF) components, which were significantly correlated with pain level. Jiang et al [ ] extracted time-domain and frequency-domain HRV features from electrocardiogram data to classify pain level and obtained an AUC of 0.82 in the receiver operating characteristic curve.
However, there are also studies that suggest a lack of correlation between HRV and pain level. Meeuse et al [
] found no significant correlation between HRV features and heat pain level in their study. It is important to note that an electrocardiogram alone may not be sufficient to accurately detect pain, and other physiological signals, such as skin conductance and electromyography, may need to be considered as well. Furthermore, individual differences in pain perception and the variability of pain stimuli may affect the reliability of pain detection using an electrocardiogram.
EDA as a Pain Indicator
EDA, also referred to as galvanic skin response, is a physiological gauge of the skin’s electrical conductance. This conductance changes according to the functioning of sweat glands within the skin [
]. The measurement of EDA is a noninvasive process involving the placement of 2 electrodes, often on the fingers or palms. Activation of the SNS, triggered by situations such as stress or pain, leads to increased sweat gland activity, causing a rise in the skin’s electrical conductance.
Within the context of automated pain recognition, EDA serves as a valuable indicator due to its reflection of SNS activity [
], which is closely linked to the body’s response to pain. Numerous research studies have highlighted EDA’s potential in pain detection. For instance, in the BioVid dataset developed by Walter et al [ ], EDA was recorded as one of the modalities, revealing a correlation between EDA features and the intensity of pain.
sEMG is another important physiological modality in automatic pain detection. sEMG can measure the electrical activity of muscles and has been used to measure facial expression [
] or muscle movement of specific body parts, such as the back muscles [ ]. These measures can provide additional information about the pain experience and may be used in combination with other modalities for better pain detection accuracy [ ].
Devices
Data collection is crucial in research, especially in statistical and ML-based studies; the collected data must be accurate, informative, and clean. Selecting the right measurement devices is therefore essential for obtaining high-quality data.
is a summary of previously used measurement devices in pain assessment studies. [ - ] presents 3 typical types of devices used in physiological signal–based pain assessment: wristband, headset, and chest band. The importance of wearable devices in this context cannot be overstated; they enable ubiquitous, real-time data collection [ , ], especially with the rise of body sensor networks. This technological advancement allows for extensive data gathering in wearable and remote settings, making continuous monitoring both feasible and affordable.
Device | Physiological signals | Connectivity | Type | FDAa-cleared | Reference |
Bioharness 3 | Electrocardiogram | Bluetooth | Chest band | Yes | [ , ]
Affectiva Q sensor | EDAb | Bluetooth | Wristband | Yes | [ ]
Procomp+ | EDA and heart rate | Wired | Measurement hub | Yes | [ ]
Emotive EPOC 14-channel electroencephalogram wireless recording headset | Electroencephalogram | Bluetooth | Headset | No | [ ]
RespiBan | Respiration rate | Bluetooth | Chest band | No | [ ]
Empatica E4 | EDA, BVPc, and respiration rate | Wired | Wired sensor | Yes | [ ]
Infiniti 3000A platform with Flex and Pro sensors | BVP, electrocardiogram, and EDA | Wired | Sensor hub | Yes | [ , ]
Polar RS800CX | HRVd | Wired | Watch | No | [ ]
aFDA: Food and Drug Administration.
bEDA: electrodermal activity.
cBVP: blood volume pulse.
dHRV: heart rate variability.

There are several studies that have evaluated the usability and reliability of different measurement devices. Researchers can refer to these studies when choosing measurement devices for their own research. Ajayi et al [
] evaluated the Empatica E4 by comparing its results with nurse-recorded data and collecting questionnaires from participants. Nazari et al [ ] tested the reliability of BioHarness and Fitbit measures of heart rate and activity at rest. Rawstorn et al [ ] evaluated the BioHarness by testing it on volunteers with both sinus rhythm and atrial fibrillation during simulated daily activities as well as low-, moderate-, and high-intensity exercise. Loberg et al [ ] evaluated 4 different respiratory effort sensors, comparing them with a respiratory sensor from NOX Medical as the gold-standard reference device.
Feature Extraction
Overview
In the field of ML, pattern recognition, and image processing, feature extraction is a crucial step that involves transforming raw data into informative and nonredundant features to facilitate subsequent learning and generalization. Physiological signals typically carry implicit information that needs to be revealed through appropriate feature extraction techniques. While deep learning methods often generate features automatically, traditional ML methods require manual feature extraction.
For physiological signals, time window segmentation is commonly used to extract features. This involves segmenting the signals into chunks of equal time intervals and generating a row vector for each segment, with 1 value per feature (eg, the mean value of the segment). Physiological signal features can be classified into 4 categories: time-domain, frequency-domain, time-frequency-domain, and space-domain features.
Time-domain features describe the statistical and morphological properties of physiological signals, such as maximum value, SD, entropy, and mean R-R interval in electrocardiogram signals. Frequency-domain features characterize the spectral properties of signals, such as LF band power and low-high frequency ratio. Time-frequency-domain features consider both time-domain and frequency-domain properties simultaneously to account for the short duration and changing nature of physiological signals. Space-domain features, such as multispectral imaging and topography, are used to represent topographic characteristics of brain activity features, including electroencephalograms, fMRI, and fNIRS.
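The time window segmentation scheme described above can be sketched as follows. This is a minimal example on synthetic data; the window length, sampling rate, and the particular time-domain feature set are illustrative assumptions:

```python
import numpy as np

def segment_features(signal, fs, window_s=5.0):
    """Split a 1-D signal into equal windows; one feature row per window."""
    win = int(fs * window_s)
    n = len(signal) // win
    rows = []
    for i in range(n):
        seg = signal[i * win:(i + 1) * win]
        # Simple time-domain features: mean, SD, max, min of the segment
        rows.append([seg.mean(), seg.std(), seg.max(), seg.min()])
    return np.array(rows)  # shape: (n_windows, n_features)

fs = 32  # eg, an EDA sampling rate (assumed)
sig = np.sin(np.linspace(0, 20, fs * 60))  # 60 s of synthetic signal
X = segment_features(sig, fs)
print(X.shape)  # (12, 4): twelve 5-second windows, 4 time-domain features
```

Each row of the resulting matrix then serves as one training sample for a classifier; frequency-domain features would be appended per window in the same way.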
The complexity of physiological signals can guide feature selection. Signals with high stochastic stationarity and low signal-to-noise ratio, such as photoplethysmography and EDA, are considered low in complexity and can be represented by 1 or 2 feature domains. Signals with low stochastic stationarity and high signal-to-noise ratio, such as electrocardiogram, electroencephalogram, and fMRI, are high in complexity and require 3 to 4 feature domains to capture all relevant information. Nowadays, numerous Python libraries are available that facilitate the rapid extraction of features in physiological signals [
, ], electroencephalograms [ ], video [ ], and audio [ ] domains. A summary of the commonly used features is presented in .
Category, feature, and description | Reference
HRVa time-domain measures | [ ]
SD of NNb intervals
SD of RRc intervals
STDd of the average NN intervals for each 5-min segment of a 24-hour HRV recording
Mean of the STD of all the NN intervals for each 5-min segment of a 24-hour HRV recording
Percentage of successive RR intervals that differ by >50 ms
Average difference between the highest and lowest heart rates during each respiratory cycle
Root mean square of successive RR interval differences
Integral of the density of the RR interval histogram divided by its height
Baseline width of the RR interval histogram
HRV frequency-domain measures | [ ]
Absolute power of the ultra-LFe band (≤0.003 Hz)
Absolute power of the very-LF band (0.0033-0.04 Hz)
Peak frequency of the LF band (0.04-0.15 Hz)
Absolute power of the LF band (0.04-0.15 Hz)
Relative power of the LF band (0.04-0.15 Hz) in normal units
Relative power of the LF band (0.04-0.15 Hz)
Peak frequency of the HFf band (0.15-0.4 Hz)
Absolute power of the HF band (0.15-0.4 Hz)
Relative power of the HF band (0.15-0.4 Hz) in normal units
Relative power of the HF band (0.15-0.4 Hz)
Ratio of LF to HF power
HRV nonlinear measures | [ ]
Area of the ellipse that represents the total HRV
Poincaré plot SD perpendicular to the line of identity (SD1)
Poincaré plot SD along the line of identity (SD2)
Ratio of SD1 to SD2
Detrended fluctuation analysis, which describes short-term fluctuations
Detrended fluctuation analysis, which describes long-term fluctuations
Correlation dimension, which estimates the minimum number of variables required to construct a model of system dynamics
Amplitude
Peak amplitude | [ ]
Peak-to-peak amplitude | [ ]
Root mean square | [ ]
Mean absolute value | [ ]
Mean relative time of the peaks | [ ]
Mean relative time of the valleys | [ ]
Variability
IQR | [ ]
Range | [ ]
SD | [ ]
Variance | [ ]
Mean resting rate | [ ]
Slope resting rate | [ ]
Stationarity
Integral degree of stationarity | [ ]
Modified integral degree of stationarity | [ ]
Modified mean degree of stationarity | [ ]
Median | [ ]
SD of SD vector | [ ]
Entropy
Approximate entropy | [ ]
Fuzzy entropy | [ ]
Sample entropy | [ ]
Shannon entropy | [ ]
Spectral entropy | [ ]
Linearity | [ ]
Lag dependence function | [ ]
Population lag dependence function | [ ]
Similarity
Correlation coefficient | [ ]
Median coherence | [ ]
Mean coherence | [ ]
Modified mean coherence | [ ]
Modified integral of coherence | [ ]
Mutual information | [ ]
Frequency
Bandwidth | [ ]
Center frequency | [ ]
Median frequency | [ ]
Mean frequency | [ ]
Mode frequency | [ ]
Zero crossings | [ ]
aHRV: heart rate variability.
bNN: normal-to-normal (the interval between adjacent normal heartbeats).
cRR: the interval between 2 consecutive R waves.
dSTD: SD.
eLF: low-frequency.
fHF: high-frequency.
Brain Activity Features
Physiological signals, including electroencephalograms, fMRI, and fNIRS, have unique characteristics that require specific feature extraction techniques. Electroencephalogram signals, for example, have high topological complexity, as multiple channels are measured simultaneously. They can be divided into different frequency bands, such as delta, theta, alpha 1, alpha 2, beta 1, beta 2, gamma 1, and gamma 2. To assess pain, Panavaranan et al [
] used power spectral density features calculated using fast Fourier transform. Hadjileontiadis et al [ ] combined continuous wavelet transform with higher-order statistics and spectra to create a new feature space for electroencephalograms. Rissacher et al [ ] found the temporal-parietal alpha band of electroencephalograms to be a useful feature for pain assessment.
In fMRI, Tu et al [
] proposed a novel dimension reduction method that incorporates singular value decomposition into sliced inverse regression to overcome the limitations of sliced inverse regression when dealing with high-dimensional data. This method was used to assess pain, achieving 77.61% binary classification accuracy.
There are various feature extraction approaches for electroencephalogram signals, as summarized by Behzadfar et al [
]. For brain activity signals in general, van der Miesen et al [ ] outlined the state and progress in pain detection using these signals.Electrocardiogram Features
Unlike general statistical feature extraction methods, electrocardiogram feature extraction draws more on human expertise in electrocardiography and is therefore more interpretable. Shaffer et al [
] provided an overview of HRV features, covering time-domain, frequency-domain, and nonlinear measures. Time-domain and frequency-domain features are widely used in pain assessment studies. On the BioVid dataset, Werner et al [ ] derived the mean resting rate, root mean square of successive differences, and slope of the resting rate from the electrocardiogram signal. Gruss et al [ ], Campbell et al [ ], and Kachele et al [ ] used the same 3 features in their studies. Kachele et al [ ] also applied 4-level wavelet decomposition to detected R peaks to extract the mean alpha 1 coefficients. Jiang et al [ ] extracted time-domain features, such as the average interval between normal heartbeats, SD of normal heartbeat intervals, root mean square of successive differences, and percentage of successive RR intervals that differ by >20 ms, and frequency-domain features, such as LF power, HF power, and the LF to HF ratio, from an electrocardiogram and attained an AUC of 0.82 for induced electrical pain and an AUC of 0.75 for induced thermal pain.
Apart from HRV, other features have been used for various purposes. For instance, some studies have used morphological features, such as QRS complex duration and amplitude, T-wave amplitude, and ST-segment changes, for diagnosing cardiac abnormalities [
].EDA and Electromyography Features
EDA and electromyography are critical tools in pain detection because they measure physiological responses that are directly linked to the autonomic nervous system’s reactions, which vary significantly with pain perception [
, ]. Walter et al [ ] systematically gathered and summarized feature extraction methods for EDA and electromyography signals from previous research and categorized them into the mathematical groups of (1) amplitude, (2) frequency [ ], (3) stationarity [ ], (4) entropy [ ], (5) linearity [ ], and (6) variability. In total, 33 different features were listed, and their efficacy in pain assessment on the BioVid dataset was demonstrated. Gruss et al [ ] then built on this feature table, extending it to 39 features. Campbell et al [ ] also developed a feature list based on the study by Walter et al [ ]. They further proposed an ML-based feature selection approach that deploys univariate feature selection and sequential forward selection for 100 epochs, with cross-validation as the metric, to explore the optimal feature set. From their results, a table of relationships between features and pain was displayed, illustrating the discriminative strength of the features. In addition, amplitude, power, and unique functional features of electromyography signals were noted as useful across the different feature sets. summarized the features used in previous studies.
Models
Overview
In the field of ML, the “no free lunch” theorem has been referred to often when talking about model selection [
]. This theorem states that “any two optimization algorithms are equivalent when their performance is averaged across all possible problems,” which implies that no single algorithm always has the best performance for all ML tasks. Thus, appropriate model selection is necessary for the success of ML-based pain assessment. In this section, we compare different ML algorithms by illustrating their advantages and disadvantages and their applicable scenarios. provides a summary of the prevalent ML algorithms used in pain assessment.
Model | Advantages | Disadvantages | Reference
Support vector machine | | | [ , ]
Decision tree | | | [ ]
Random forest | | | [ , ]
Neural networks | | | [ , ]
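To make the comparison concrete, the 4 model families above can be trained on the same feature matrix with scikit-learn. This is an illustrative sketch on synthetic data standing in for extracted physiological features, not a reproduction of any surveyed study; the hyperparameters are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a physiological feature matrix (pain vs no pain)
X, y = make_classification(n_samples=300, n_features=20, n_informative=8, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),   # needs careful scaling/tuning
    "Decision tree": DecisionTreeClassifier(max_depth=5),        # interpretable, prone to overfitting
    "Random forest": RandomForestClassifier(n_estimators=100),   # bagged trees, parallel-friendly
    "Neural network": make_pipeline(
        StandardScaler(), MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
}
# 5-fold cross-validated accuracy for each model family
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

With real pain data, the comparison would additionally use subject-independent splits (leave-one-subject-out), since the cited studies report large gaps between intrasubject and cross-subject accuracy.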
SVM for Pain Classification
The first commonly used ML model in physiological signal–based automatic pain detection is SVM [
, ]. SVM is a type of generalized linear classifier that classifies data in a supervised manner [ ]. Its decision boundary is the maximum-margin hyperplane for the learning samples. SVM also supports kernel tricks, which make it an effectively nonlinear classifier. The final decision of an SVM depends only on the support vectors, which makes it suitable for small-sample learning. On the other hand, SVM offers little insight into how individual input variables contribute to the derived predictors [ ], which is important in some areas, such as financial prediction and health applications. In addition, SVM requires careful preprocessing and tuning to achieve the best performance. Panavaranan et al [ ] applied a polynomial kernel SVM to electroencephalogram data and obtained an accuracy of 96.97%. Gruss et al [ ] used SVM on the BioVid dataset and attained 90.94% accuracy on pain tolerance classification. In addition, Jiang et al [ ] obtained an AUC of 0.82 with the use of SVM. More recently, Badura et al [ ] achieved 94% accuracy using a Gaussian kernel SVM.
Decision Tree for Pain Classification
Unlike SVM, decision tree is known for its interpretable characteristic. The decision tree algorithm is a method of approximating the value of a discrete function [
, ]. It is a typical classification method that uses an induction algorithm to generate readable rules and decision trees and then uses decision-making to analyze new data. Essentially, a decision tree classifies data through a series of rules. Because of their inherent interpretability, tree-based algorithms help ML processes move beyond the “black box” model [ ]. By contrast, due to their simple structure, tree-based models are prone to overfitting [ ]. They also handle missing data poorly because of the continuity of the tree structure.
RF for Pain Classification
RF is an algorithm that integrates multiple trees through the idea of ensemble learning. Its basic unit is the decision tree, and it essentially belongs to the large “ensemble learning” branch of ML. Intuitively, each decision tree acts as a classifier, so for a given input sample, N decision trees will produce N classification results. RF integrates all classification votes and designates the category with the most votes as the final output, which is the “bagging” idea. Owing to its tree base and bagging strategy, RF resists overfitting, parallelizes easily, and handles high-dimensional data well [
]. In contrast, RFs require more time for training and prediction than single decision trees. Vijayakumar et al [ ] applied RF to 25 subjects’ electroencephalogram data and obtained 89.45% accuracy. Naeini et al [ ] used RF on the BioVid dataset and achieved an accuracy of 79%. Werner et al [ ] used RF on their new “X-ITE” dataset and achieved 94.3% accuracy for phasic electrical pain classification.
Neural Networks for Pain Classification
NN have also been used by scholars for automatic pain detection [
, ]. NNs abstract the human brain’s neuronal networks from an information processing perspective, establish simple models, and compose different networks according to different connection structures. Thanks to the development of the digital society, the amount of data available for ML has grown substantially. NNs, which can be made deep in their layer structure, can reveal implicit information in data. Therefore, as the amount of data grows, the performance of NNs keeps increasing, while traditional algorithms, such as SVM and RF, plateau. Nevertheless, NNs have the drawback of being “black boxes.” Such uninterpretability keeps NNs from flourishing in certain fields, such as text and code analysis [ ], judicial decision-making, and artificial intelligence medicine, because these fields require a clear, understandable, and interpretable decision-making process. Martinez et al [ ] used an NN on the BioVid dataset and obtained 82.75% accuracy on multitask classification. Jiang et al [ ] applied an artificial neural network to 30 subjects and attained an average accuracy of 83.3%. Derivatives of neural networks are also widely used in automated pain assessment, such as the CNN [ ], RNN [ ], and LSTM network [ ].
Audio Analysis
Infant crying is a common sign of discomfort, hunger, or pain. It conveys information that helps caregivers assess the infant’s emotional state and react appropriately. Crying analysis can be divided into two main stages: (1) the signal processing stage, which includes preprocessing the signal and extracting representative features; and (2) the classification stage. We classified the existing methods of signal processing stage into (1) time-domain methods; (2) frequency-domain methods; and (3) cepstral-domain methods.
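As a concrete sketch of the signal processing stage, two simple time-domain features discussed below, short-time energy and pause duration, can be computed from a waveform as follows. The audio here is synthetic, and the frame length and silence threshold are assumptions:

```python
import numpy as np

def short_time_energy(x, fs, win_s=0.02):
    """Average squared amplitude per 20-ms frame (short-time energy)."""
    win = int(fs * win_s)
    n = len(x) // win
    frames = x[: n * win].reshape(n, win)
    return (frames ** 2).mean(axis=1)

def pause_fraction(energy, threshold):
    """Fraction of frames below an energy threshold (pauses within a cry)."""
    return float(np.mean(energy < threshold))

fs = 44100  # the sampling rate reported for the recordings
t = np.linspace(0, 1, fs, endpoint=False)
x = np.sin(2 * np.pi * 440 * t) * (t < 0.5)  # tone for 0.5 s, then silence
e = short_time_energy(x, fs)
print(round(pause_fraction(e, threshold=0.01), 2))  # 0.5: half the frames are silent
```

A classifier such as an SVM would then be trained on these per-recording feature vectors, as in the study described below.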
Time-domain analysis is the analysis of a signal with respect to time (ie, the variation of a signal’s amplitude over time). Linear prediction coding is one of the most common time-domain methods for analyzing sounds. The main concept behind linear prediction coding is the use of a linear combination of the past time-domain samples to predict the current time-domain sample. Other time-domain features that are commonly used for infants’ sound analysis are energy, amplitude, and pause duration. Vempada et al [
] presented a time-domain method to detect discomfort-relevant cries. The proposed method was evaluated on a dataset consisting of 120 cry corpuses collected during pain (30 corpuses), hunger (60 corpuses), and wet diaper (30 corpuses). We note that the paper does not provide information about the stimulus that triggered the pain state or the data collection procedure. The infants’ ages ranged from 12 to 40 weeks. All corpuses were recorded using a Sony digital recorder with a sampling rate of 44.1 kHz. In the feature extraction stage, two features were calculated: (1) short-time energy, which is the average of the squared sample values in a suitable window; and (2) pause duration within the crying segment. Part of these features was used to build an SVM, and the remaining features were used to evaluate its performance. The recognition performance for pain cries, hunger cries, and wet diaper cries was 83.33%, 27.78%, and 61.11%, respectively. The average recognition rate was 57.41%.
Pupil Size
The measurement of changes in pupil size has been shown to be a promising physiological indicator of pain intensity. Pupil size can be used to monitor the effects of painful stimuli in the brain. The pupil dilates in response to pain due to the activation of the sympathetic branch, which releases norepinephrine, and the inhibition of the parasympathetic branch, which is responsible for constriction of the pupil. This section discusses the mechanism of using pupil dilation as a pain indicator and reviews the literature on using pupil dilation for automated pain assessment.
Pupil dilation is a complex physiological response regulated automatically by 2 muscles in the eye, the sphincter pupillae and the dilator pupillae. The sphincter pupillae is controlled by the parasympathetic system to constrict the pupil, while the dilator pupillae is controlled by the sympathetic system to dilate the pupil [
].
Höfle et al [
] investigated the influence of different luminance conditions on pupillometry for pain detection and found that baseline pupil size differed significantly under different luminance conditions, while the peak dilation remained the same. Bertrand et al [ ] explored the influence of gender and anxiety on pupil dilation for pain detection and concluded that pupil dilation changes similarly in both men and women and is exacerbated in the presence of anxiety. Connelly et al [ ] conducted an experiment on 30 children undergoing elective surgical correction of pectus excavatum and found that maximum pupil size, percent change in pupil size, and maximum constriction velocity were the features most related to pain intensity. Chapman et al [ ] reported a delay of 1.25 seconds in 20 adult volunteers under noxious stimulation, while Eisenach et al [ ] reported a peak in pupil size with a lag of 4.25 seconds after the onset of heat pain in 28 adult volunteers. Wang et al [ ] found that the pupillary response together with ML algorithms could be a promising method of objective pain level assessment by measuring pupillary response during induced cold pain in 32 subjects.
Multimodal Pain Detection
Including more modalities can increase information density, which can in turn improve accuracy. Thus, researchers have been increasingly turning to multimodal approaches to enhance the accuracy and reliability of automated pain assessment systems. These approaches combine information from multiple modalities, such as biomedical signals and facial expressions, to provide a more comprehensive understanding of the patient’s pain experience. Furthermore, a multimodal approach can capture a more nuanced and diverse range of pain responses, which is particularly important given the wide variation in pain perception among individuals with different characteristics and cultural backgrounds.
presents a typical flow of multimodal pain assessment.
Fusion strategies commonly used in multimodal pain assessment can be categorized into early fusion and late fusion. Early fusion involves the combination of features from different modalities before the training of a classifier, while late or decision fusion combines the predictions of individual classifiers after training. Common methods of combining predictions include fixed methods such as taking the mean or product and trainable methods such as using a pseudoinverse.
illustrates the early and late fusion strategies. Some research has explored combining early and decision fusion by merging specific features at the feature level and then fusing those with other features at the decision level [ ].
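The two strategies differ only in where the modalities are combined. The sketch below illustrates this on synthetic features with off-the-shelf scikit-learn classifiers; the modality blocks, classifier choices, and the mean rule for combining predictions are assumptions for illustration, not the pipeline of any cited study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Synthetic "video" and "physiological" feature blocks sharing one label
y = rng.integers(0, 2, n)
video = rng.normal(size=(n, 8)) + y[:, None] * 0.8
physio = rng.normal(size=(n, 4)) + y[:, None] * 0.8

# Early fusion: concatenate features, then train a single classifier
early = RandomForestClassifier(random_state=0).fit(np.hstack([video, physio]), y)

# Late (decision) fusion: one classifier per modality, predictions combined
# with a fixed rule (here, the mean of the predicted probabilities)
clf_v = RandomForestClassifier(random_state=0).fit(video, y)
clf_p = SVC(probability=True, random_state=0).fit(physio, y)
proba = (clf_v.predict_proba(video) + clf_p.predict_proba(physio)) / 2
late_pred = proba.argmax(axis=1)
```

A trainable combiner (eg, a pseudoinverse mapping from the stacked probabilities to the labels) would replace the fixed mean in the last step.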
The first study to combine video and physiological signals for automated pain detection was conducted by Werner et al [
], who used an early fusion strategy to concatenate features from both modalities. The optimal fusion set was found to be the combination of all video and physiological signals, achieving accuracies of 80.6% and 77.8% for person-specific and generic classifiers, respectively, in detecting baseline and highest tolerable pain using an RF ensemble–based classifier. Kachele et al [ ] applied both early and late fusion strategies using an SVM with a linear kernel and RF for recognizing baseline and highest tolerable pain, achieving accuracies of 68.2% and 76.6% for early and late fusion, respectively. Continuing with the BioVid dataset, Kachele et al [
] applied early and late fusion techniques with new features included, achieving slightly better results with late fusion (83.1%) than with early fusion (82.7%). Thiam et al [ ] proposed a hierarchical fusion architecture that divides multimodal data into 3 subsets. These subsets are used for the first layer of RF training, followed by pseudoinverse mapping, multilayer perceptron mapping, and a final layer that combines both pseudoinverse and multilayer perceptron fusion mappings. Kessler et al [ ] took advantage of the fusion strategy proposed by Thiam et al [ ] and applied it to remote photoplethysmography. Other studies focus on incorporating additional modalities, such as audio. Velana et al [
] published the SenseEmotion database, which for the first time captures video, physiological signals, and audio together. Thiam et al [ ] merged features from video, physiological signal, and audio data on the SenseEmotion dataset, exploring different data fusion strategies, including early fusion, group late fusion, and individual late fusion. Results show that individual late fusion slightly outperforms the other strategies in leave-subject-out experiments, while group late fusion performs slightly better on the user-specific task. There is also a dataset for neonatal pain assessment that includes video, audio, and physiological signals [ , ]. Recent studies have explored new fusion approaches. Bellmann et al [
] proposed a dominant channel fusion approach that identifies the most relevant input channel and combines it with the remaining channels to create an ensemble of classifiers. Bellmann et al [ ] proposed a novel late fusion approach that combines mixture-of-experts and stacked generalization approaches and assessed it on different datasets involving the biophysiological modalities electromyography, electrocardiogram, and EDA. Thiam et al [ ] proposed an information theoretic approach that uses a deep denoising convolutional autoencoder to learn and aggregate latent representations based on each input channel. However, it is evident that late fusion, using multiple models as part of an ensemble learning approach, requires significantly more computational power and storage space than early fusion methods. As pain assessment is an emerging field, the current focus is predominantly on enhancing predictive accuracy rather than on resource use, and discussions of model complexity are relatively scarce. However, with the advent of TinyML and the rise of edge computing [
], running large models on microprocessors becomes challenging. Consequently, early fusion might gain popularity on edge devices, where the ability to run simpler, more compact models efficiently is crucial. This shift could make early and lightweight fusion approaches more viable and preferred in scenarios where computational resources are limited. In addition, with the increasing inclusion of multimodal data, we can envisage future fusion methods potentially incorporating recently developed self-attention algorithms [ ].
Discussion
The pain assessment field is faced with several challenges and opportunities for future development. This section will focus on 3 areas of concern—data, ML techniques, and ethical considerations—and then propose future research directions.
Data
Automatic pain assessment is challenged by the limited availability of clinical pain data, as most studies have focused on experimental or induced pain. Widely used datasets such as BioVid, BP4D+, and X-ITE are collected from healthy volunteers and use external thermal or electrical pain. These studies are conducted under consistent experimental conditions that differ from real-world scenarios. Furthermore, induced pain has different mechanisms than disease pain, which encompasses different types of pain, such as nociceptive and central pain. Therefore, it is important to test models trained on experimental data using clinical pain data. In addition, more clinical pain data should be collected to facilitate the development of automatic pain assessment models and enable their use in clinical trials.
Pupil dilation has been identified as a promising indicator of brain activity and pain levels. However, in previous studies, pain was often used as the stimulus for measuring brain activity rather than being the focus of the study. Consequently, only a few studies have directly correlated pupil dilation with pain levels. A potential research direction is to include pupil dilation in the automatic pain assessment modality family. Pupil dilation has been shown to be effective in affective computing, with datasets such as MAHNOB-HCI and SEED containing eye-tracking data that demonstrate the contribution of pupil data to arousal detection. As pain can also be regarded as physiological arousal, transferring pupil dilation to automatic pain assessment studies is a worthwhile area of research.
Personalization of Pain Responses
In the following subsection, we explore personalized pain detection, focusing on the considerable differences in pain experiences among individuals. Pain perception varies widely due to a mix of biological factors and social-psychological influences. These differences are shaped by demographics such as gender, age, and ethnicity, which are linked to varying rates of chronic pain. In addition, factors such as genetic predispositions and psychological processes also significantly impact pain responses, whether in clinical settings or experimental scenarios. Importantly, these elements interact in complex ways, crafting the unique pain experience of each individual. Research has highlighted that genetic markers associated with pain can differ across genders and ethnicities and interact with psychological aspects such as stress, affecting pain perception. This myriad of interacting factors culminates in a distinctive set of influences on each person’s experience of pain [
]. Jiang et al [
] introduced a method that enhances pain assessment by incorporating personalized features. They used ML to analyze individual pain data, enabling the model to tailor its predictions to each patient’s unique physiological and psychological characteristics. This approach improves the accuracy of pain management by adapting to personal pain profiles. Casti et al [ ] developed a platform to improve pain diagnosis by leveraging personalized data. Using a combination of visual, speech, and physiological indicators, they used ML techniques to tailor assessments to individual patient profiles, enhancing the precision and effectiveness of pain management strategies. Martinez et al [ ] proposed a method to refine pain estimation by integrating personalized features. They used ML to analyze individual facial expressions, allowing the model to adjust its predictions based on each person’s unique facial expressiveness score. This approach enhances the accuracy of Visual Analog Scale estimations by adapting to individual pain profiles [ ]. Most papers on personalized pain assessment claim personalization at the model level, focusing on enhancing ML models to suit individualized approaches or using ML techniques to delve deeper into databases for extracting personalized information to improve predictions. The predominant reliance on public databases for research is evident, as most researchers use these readily available datasets. This reliance restricts personalization efforts to the data provided by these databases, making highly tailored training challenging. In addition, most pain-related datasets globally are derived from experiments involving artificially induced pain, which must pass rigorous ethical or clinical trial reviews, further limiting the quantity of available data. Looking to the future, personalization will undoubtedly be a crucial focus.
It is foreseeable that researchers will collect more personalized data during experiments, including variables such as personality traits and ethnicity. This will likely lead to the generation of more nuanced datasets that include varied physiological responses to different pain stimuli, enhancing the granularity and effectiveness of personalized pain management solutions.
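One simple, generic form of such personalization is per-subject feature calibration: z-scoring each person's signals against their own statistics so that a shared model sees deviations from that individual's norm rather than raw values. The following is a minimal sketch of the general idea, not the method of any cited study:

```python
import numpy as np

def personalize(features, subject_ids):
    """Z-score each feature within subject, so a shared downstream model
    operates on deviations from each person's own baseline."""
    X = np.asarray(features, dtype=float)
    sids = np.asarray(subject_ids)
    out = np.empty_like(X)
    for sid in np.unique(sids):
        mask = sids == sid
        mu = X[mask].mean(axis=0)
        sd = X[mask].std(axis=0)
        # Guard against zero variance to avoid division by zero
        out[mask] = (X[mask] - mu) / np.where(sd > 0, sd, 1.0)
    return out
```

More elaborate schemes replace this fixed normalization with learned, per-subject model components, as in the studies cited above.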
Real-Time Pain Detection
Building on our earlier discussion about the personalization of pain responses, it is essential to delve into another critically relevant clinical application: real-time monitoring [
]. The goal of such monitoring is not just to detect pain but to enable timely and effective interventions that can significantly enhance patient outcomes. Real-time monitoring of pain becomes particularly crucial in postoperative care, where accurately gauging a patient’s pain levels is vital for adjusting analgesic dosages. This not only helps in managing the pain effectively but also minimizes the risk of both undermedication and overmedication, which can lead to complications such as opioid dependency or inadequate pain relief. In ICUs, the stakes are even higher. Many patients in ICUs are unable to communicate due to their conditions or sedation, making verbal reports of pain unreliable. Here, real-time monitoring systems can play a transformative role by continuously tracking pain indicators through physiological signals such as heart rate, blood pressure, and facial expressions. These data can then be analyzed to provide a dynamic, real-time assessment of pain, informing caregivers when an intervention is necessary. Moreover, real-time monitoring integrates seamlessly with the concept of personalized pain management. By continuously collecting and analyzing data specific to each patient, health care providers can tailor their interventions more precisely to the individual’s pain profile and response to treatment. This approach not only improves the quality of care but also enhances patient comfort and satisfaction. As technology advances, the potential for real-time pain monitoring grows. Innovations in wearable technology, ML algorithms, and data integration are paving the way for even more accurate and responsive pain management systems. 
These systems promise to transform how pain is managed in health care settings, making care more proactive, patient centered, and effective. In the academic sphere, the development of real-time pain monitoring is primarily concentrated on 2 aspects: improving model efficiency to enable fast judgments suitable for real-time applications and developing practical tools such as wearable devices and mobile apps to facilitate widespread implementation. Enhancing the processing speed of models involves not only maintaining accuracy but also integrating advanced ML technologies, such as deep learning. Meanwhile, the development of tools such as wearables and mobile apps allows for the noninvasive collection of physiological data and real-time analysis, helping patients and health care providers to promptly assess pain levels and treatment effectiveness. This combination of improved models and practical tools is driving pain management toward more precise, personalized, and proactive solutions. Kong et al [
] introduced a smartphone app that enhances real-time pain detection using EDA signals collected from a wrist-worn device. They tested the app with thermal grill and electrical pulse data, demonstrating high accuracy in pain detection with an RF model. This approach offers a practical solution for objective, near–real-time pain assessment in everyday settings. Dai et al [ ] addressed automatic pain detection using a mix of pain and emotion datasets to enhance model robustness, achieving 88.4% accuracy. They criticized CNNs for overfitting on biased data and validated their method through experiments on a humanoid robot in physiotherapy, emphasizing the importance of real-time, real-world testing and assessing the system’s practical utility and accuracy. In summary, the advancement of real-time pain monitoring represents a significant enhancement in health care, enabling precise and timely interventions that are tailored to the unique needs of each patient. This technology not only improves the accuracy of pain assessments but also enriches the quality of care by integrating cutting-edge ML models and wearable technologies. As this field continues to evolve, it holds the promise of transforming pain management into a more responsive, personalized, and patient-centered practice.
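At its core, such near–real-time processing is a sliding window over a physiological stream with a decision emitted per step. The sketch below is deliberately simplified: the buffering logic is generic, while the EDA summary statistics and fixed threshold are placeholders standing in for a trained classifier such as the RF models described above:

```python
from collections import deque

import numpy as np

class StreamingPainMonitor:
    """Keep the most recent window of EDA samples and emit a decision per
    step. The 'model' is a placeholder threshold on window statistics."""

    def __init__(self, fs=4, window_s=10, threshold=1.0):
        self.buf = deque(maxlen=fs * window_s)  # ring buffer of samples
        self.threshold = threshold

    def update(self, sample):
        self.buf.append(sample)
        if len(self.buf) < self.buf.maxlen:
            return None  # not enough data for a full window yet
        x = np.asarray(self.buf)
        # Crude tonic + phasic summary: mean level plus peak-to-peak swing
        score = x.mean() + (x.max() - x.min())
        return bool(score > self.threshold)
```

A deployed system would replace the threshold with a trained model and feed the buffer from a wearable's sampling loop.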
ML Techniques
Although deep learning has revolutionized computer vision and physiological signal analysis, traditional ML algorithms still dominate the field of physiological signal–based automatic pain assessment. One possible reason for this is that deep learning requires extensive data, which is time consuming and resource intensive to collect. Therefore, studies often include only a small number of participants, typically in the tens, making it difficult to gather comprehensive datasets.
In this context, transfer learning, a prominent topic in artificial intelligence, offers a promising alternative solution. Transfer learning involves applying knowledge gained from a source domain to a new target domain, which can be particularly useful in scenarios where data collection is challenging. Differing data distributions between the source and target domains can lead to performance degradation if models are applied directly. Transfer learning helps bridge this gap, ensuring better model performance across different settings [
]. Kächele et al [
] proposed an adaptive confidence learning method for personalizing pain intensity estimation systems, demonstrating the efficacy of transfer learning in this field. Feature extraction involved specific preprocessing steps for each signal type, such as bandpass filtering and artifact correction for electromyography. A multistage ensemble classifier was applied to learn the confidence of a regression system. This method involved selecting confident samples from unlabeled data of the test participants to iteratively adapt the model. Their experiments showed that the adaptive learning approach significantly improved the performance of pain intensity estimation. Chen et al [
] implemented “TrAdaBoost,” a transfer learning algorithm, to improve facial expression recognition, including pain expressions. They used the PAINFUL database, which contains video sequences of 25 patients with shoulder injuries, encompassing 48,398 frames of spontaneous pain expressions. The primary challenge addressed was the variability in pain expressions across different individuals. They proposed an inductive transfer learning algorithm to develop person-specific models. This algorithm first trains a set of weak classifiers on source data from multiple subjects and then selects the most relevant classifiers for the target subject. Experimental results showed that inductive transfer learning significantly improved pain detection accuracy. For example, the AUC for pain detection increased from 0.769 to 0.782 with just 10 target samples and reached 0.891 with 100 samples. Furthermore, this approach drastically reduced training time compared to traditional methods, making it feasible for rapid retraining in clinical settings. While traditional ML remains prevalent in automatic pain assessment due to data constraints, transfer learning presents a viable alternative. It addresses the challenges associated with varying data distributions and limited dataset sizes, enhancing model robustness and performance. Future research should explore the potential of transfer learning algorithms further, integrating them into clinical practice to improve pain management outcomes.
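The adaptive idea described above, adapting a generic model by pseudo-labeling its most confident predictions on the target subject's unlabeled data, can be sketched as a simple self-training loop. This is a generic illustration with logistic regression, not the cited multistage ensemble; the confidence cutoff and round count are arbitrary:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_src, y_src, X_tgt, rounds=3, conf=0.9):
    """Fit on labeled source data, then repeatedly pseudo-label the
    target samples the model is most confident about and refit."""
    X, y = X_src.copy(), y_src.copy()
    clf = LogisticRegression().fit(X, y)
    remaining = X_tgt
    for _ in range(rounds):
        if len(remaining) == 0:
            break
        proba = clf.predict_proba(remaining)
        confident = proba.max(axis=1) >= conf
        if not confident.any():
            break
        # Absorb confident target samples with their predicted labels
        X = np.vstack([X, remaining[confident]])
        y = np.concatenate([y, proba[confident].argmax(axis=1)])
        remaining = remaining[~confident]
        clf = LogisticRegression().fit(X, y)
    return clf
```

The cited work goes further by learning the confidence estimate itself with a multistage ensemble rather than reading it off the classifier's probabilities.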
Ethical Considerations
Automatic pain assessment raises several ethical concerns that need to be addressed. One primary concern is the privacy and security of patients’ health data. The use of physiological signals, such as facial expressions, speech patterns, and pupil dilation, to assess pain levels can lead to the collection of sensitive health data. Therefore, it is essential to ensure that the data collected are secure and protected from unauthorized access.
Another ethical consideration is the potential for bias in automatic pain assessment models. ML models are only as good as the data they are trained on, and if the training data are biased, the model will be biased too. Bias can result in inaccurate pain assessment, leading to inadequate pain management and, in some cases, even harm to patients. Therefore, it is crucial to ensure that the data used to train the models are representative and unbiased.
Future Directions
Automated pain assessment has made significant strides in recent years, leveraging technological advancements and data-driven approaches to enhance the accuracy and efficiency of pain detection. However, several promising directions for future research remain unexplored. Addressing these areas could lead to the development of more sophisticated and reliable automated pain assessment systems.
First, integrating data from various sources, such as pupil dilation, voice analysis, and body movement, could offer a more comprehensive understanding of pain. This requires a more comprehensive, clinical, and clean database to be released. Second, exploring novel deep learning architectures, including transformer-based models and generative adversarial networks, may yield improved performance in pain assessment tasks. These architectures could capture intricate patterns and dependencies within pain-related data, leading to enhanced predictive capabilities. Third, collaboration with health care professionals is crucial to validate the effectiveness and reliability of automated pain assessment systems in real-world clinical settings. Integrating these systems into clinical workflows could provide valuable insights and assist health care providers in making informed decisions. Finally, using transfer learning can provide new insights. In scenarios where large, annotated datasets are scarce, exploring transfer learning techniques and methods to adapt models to smaller datasets could prove beneficial. These approaches could enable the development of accurate pain assessment models even with limited training data.
Conclusions
This survey reviewed the current advancements in automated pain assessment using ML techniques. Traditional pain assessment methods, reliant on self-reports and observational scales, face significant limitations, particularly for patients who are noncommunicative. We explored various modalities for automated pain detection, including facial expressions, physiological signals, audio, and pupil dilation. While each modality has its strengths, combining multiple modalities can enhance accuracy but also introduces challenges in data fusion and model complexity. Despite progress, challenges remain, such as the scarcity of diverse clinical pain datasets and ethical concerns regarding patient privacy. Personalized pain assessment models are also necessary due to variability in pain perception across populations. Future research should focus on developing more robust algorithms and leveraging deep learning and transfer learning. Collaborative efforts to create comprehensive pain datasets are crucial, as is integrating real-time pain monitoring into clinical practice. In summary, automated pain assessment has the potential to transform pain management. Continued interdisciplinary research and collaboration are key to overcoming current challenges and fully realizing these technologies’ benefits.
Acknowledgments
RF was responsible for writing the Abstract and Introduction sections on physiological signals and pupil size, the multimodal study, the Discussion and Conclusions sections, and organizing and formatting the paper. EH was responsible for writing the Facial Expression section. RZ was responsible for writing the Pain Mechanism and Electrodermal Activity sections. SR was responsible for collecting information, reviewing, and final editing. HH was responsible for reviewing and funding acquisition.
Conflicts of Interest
None declared.
Summary of studies table.
PDF File (Adobe PDF File), 139 KB
References
- Merskey H. The definition of pain. Eur Psychiatr. Apr 16, 2020;6(4):153-159. [CrossRef]
- Williams AC, Craig KD. Updating the definition of pain. Pain. Nov 18, 2016;157(11):2420-2423. [CrossRef] [Medline]
- Yong RJ, Mullins PM, Bhattacharyya N. Prevalence of chronic pain among adults in the United States. Pain. Feb 01, 2022;163(2):e328-e332. [CrossRef] [Medline]
- Gaskin DJ, Richard P. The economic costs of pain in the United States. J Pain. Aug 2012;13(8):715-724. [FREE Full text] [CrossRef] [Medline]
- Manchikanti L, Helm S, Fellows B, Janata JW, Pampati V, Grider JS, et al. Opioid epidemic in the United States. Pain Physician. Jul 2012;15(3 Suppl):ES9-E38. [FREE Full text] [CrossRef] [Medline]
- Fink R. Pain assessment: the cornerstone to optimal pain management. Proc (Bayl Univ Med Cent). Jul 11, 2000;13(3):236-239. [FREE Full text] [CrossRef] [Medline]
- Gracely RH, McGrath P, Dubner R. Ratio scales of sensory and affective verbal pain descriptors. Pain. Jun 1978;5(1):5-18. [CrossRef] [Medline]
- McCormack HM, Horne DJ, Sheather S. Clinical applications of visual analogue scales: a critical review. Psychol Med. Nov 09, 1988;18(4):1007-1019. [CrossRef] [Medline]
- Downie WW, Leatham PA, Rhind VM, Wright V, Branco JA, Anderson JA. Studies with pain rating scales. Ann Rheum Dis. Aug 01, 1978;37(4):378-381. [FREE Full text] [CrossRef] [Medline]
- Wong DL, Baker CM. Smiling faces as anchor for pain intensity scales. Pain. Jan 2001;89(2-3):295-300. [CrossRef] [Medline]
- Dehghani H, Tavangar H, Ghandehari A. Validity and reliability of behavioral pain scale in patients with low level of consciousness due to head trauma hospitalized in intensive care unit. Arch Trauma Res. Mar 30, 2014;3(1):e18608. [FREE Full text] [CrossRef] [Medline]
- Warden V, Hurley AC, Volicer L. Development and psychometric evaluation of the Pain Assessment in Advanced Dementia (PAINAD) scale. J Am Med Dir Assoc. 2003;4(1):9-15. [CrossRef] [Medline]
- Lawrence J, Alcock D, McGrath P, Kay J, MacMurray SB, Dulberg C. The development of a tool to assess neonatal pain. Neonatal Netw. Sep 1993;12(6):59-66. [Medline]
- Kappesser J, de C Williams AC. Pain estimation: asking the right questions. Pain. Feb 2010;148(2):184-187. [CrossRef] [Medline]
- Merskey H. The taxonomy of pain. Med Clin North Am. Jan 2007;91(1):13-20, vii. [CrossRef] [Medline]
- Gorczyca R, Filip R, Walczak E. Psychological aspects of pain. Ann Agric Environ Med. 2013;Spec no. 1:23-27. [FREE Full text] [Medline]
- Garland EL. Pain processing in the human nervous system: a selective review of nociceptive and biobehavioral pathways. Prim Care. Sep 2012;39(3):561-571. [FREE Full text] [CrossRef] [Medline]
- Council NR, Criado A. Recognition and alleviation of pain in laboratory animals. Lab Anim. Oct 01, 2010;44(4):380. [CrossRef]
- Kandel ER, Schwartz JH, Jessell TM. Principles Of Neural Science. Volume 4. New York, NY. McGrawhill; 2000.
- Julius D, Basbaum AI. Molecular mechanisms of nociception. Nature. Sep 13, 2001;413(6852):203-210. [CrossRef] [Medline]
- Treede RD, Rief W, Barke A, Aziz Q, Bennett MI, Benoliel R, et al. A classification of chronic pain for ICD-11. Pain. Jun 2015;156(6):1003-1007. [FREE Full text] [CrossRef] [Medline]
- Markenson JA. Mechanisms of chronic pain. Am J Med. Jul 31, 1996;101(1A):6S-18S. [FREE Full text] [CrossRef] [Medline]
- Borsook D. A future without chronic pain: neuroscience and clinical research. Cerebrum. May 2012;2012:7. [FREE Full text] [Medline]
- Mee S, Bunney BG, Reist C, Potkin SG, Bunney WE. Psychological pain: a review of evidence. J Psychiatr Res. Dec 2006;40(8):680-690. [CrossRef] [Medline]
- Bair MJ, Robinson RL, Katon W, Kroenke K. Depression and pain comorbidity: a literature review. Arch Intern Med. Nov 10, 2003;163(20):2433-2445. [CrossRef] [Medline]
- Von Korff M, Simon G. The relationship between pain and depression. Br J Psychiatry Suppl. Jun 1996;1688(30):101-108. [CrossRef] [Medline]
- Engel GL. Psychogenic pain and the pain-prone patient. Am J Med. Jun 1959;26(6):899-918. [CrossRef] [Medline]
- Bassler M, Krauthauser H, Hoffmann SO. Inpatient psychotherapy with chronic psychogenic pain patients. Psychother Psychosom Med Psychol. 1994;44(9-10):299-307. [Medline]
- Paxton SL. Clinical uses of TENS. A survey of physical therapists. Phys Ther. Jan 1980;60(1):38-44. [CrossRef] [Medline]
- Ziemssen T, Kern S. Psychoneuroimmunology--cross-talk between the immune and nervous systems. J Neurol. May 2007;254 Suppl 2(S2):II8-I11. [CrossRef] [Medline]
- Teff KL. Visceral nerves: vagal and sympathetic innervation. JPEN J Parenter Enteral Nutr. Sep 2008;32(5):569-571. [CrossRef] [Medline]
- Singaram S, Ramakrishnan K, Selvam J, Senthil M, Narayanamurthy V. Sweat gland morphology and physiology in diabetes, neuropathy, and nephropathy: a review. Arch Physiol Biochem. Aug 05, 2024;130(4):437-451. [CrossRef] [Medline]
- Lucey P, Cohn JF, Prkachin KM, Solomon PE, Matthews I. Painful data: the UNBC-McMaster shoulder pain expression archive database. In: Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition. 2011. Presented at: FG '11; March 21-25, 2011:57-64; Santa Barbara, CA. URL: https://ieeexplore.ieee.org/document/5771462 [CrossRef]
- Walter S, Gruss S, Ehleiter H, Tan J, Traue HC, Werner P, et al. The biovid heat pain database data for the advancement and systematic validation of an automated pain recognition system. In: Proceedings of the 2013 IEEE International Conference on Cybernetics. 2013. Presented at: CYBCO '13; June 13-15, 2013:128-131; Lausanne, Switzerland. URL: https://ieeexplore.ieee.org/document/6617456 [CrossRef]
- Haque MA, Bautista RB, Noroozi F, Kulkarni K, Laursen CB, Irani R, et al. Deep multimodal pain recognition: a database and comparison of spatio-temporal visual modalities. In: Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition. 2018. Presented at: FG '18; May 15-19, 2018:250-257; Xi'an, China. URL: https://ieeexplore.ieee.org/document/8373837 [CrossRef]
- Aung MS, Kaltwang S, Romera-Paredes B, Martinez B, Singh A, Cella M, et al. The automatic detection of chronic pain-related expression: requirements, challenges and the multimodal emopain dataset. IEEE Trans Affective Comput. Oct 1, 2016;7(4):435-451. [CrossRef]
- Velana M, Gruss S, Layher G, Thiam P, Zhang Y, Schork D, et al. The SenseEmotion database: a multimodal database for the development and systematic validation of an automatic pain- and emotion-recognition system. In: Proceedings of the 4th IAPR TC 9 Workshop on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction. 2016. Presented at: MPRSS '16; December 4, 2016:127-139; Cancun, Mexico. URL: https://link.springer.com/chapter/10.1007/978-3-319-59259-6_11 [CrossRef]
- Gruss S, Geiger M, Werner P, Wilhelm O, Traue HC, Al-Hamadi A, et al. Multi-modal signals for analyzing pain responses to thermal and electrical stimuli. J Vis Exp. Apr 05, 2019;(146). [CrossRef] [Medline]
- Zhang X, Yin L, Cohn JF, Canavan S, Reale M, Horowitz A, et al. BP4D-Spontaneous: a high-resolution spontaneous 3D dynamic facial expression database. Image Vis Comput. Oct 2014;32(10):692-706. [CrossRef]
- Zhang Z, Girard JM, Wu Y, Zhang X, Liu P, Ciftci U. Multimodal spontaneous emotion corpus for human behavior analysis. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. 2016. Presented at: CVPR '16; June 27-30, 2016:3438-3446; Las Vegas, NV. URL: https://ieeexplore.ieee.org/abstract/document/7780743 [CrossRef]
- Brahnam S, Chuang CF, Shih FY, Slack MR. SVM classification of neonatal facial images of pain. In: Proceedings of the 6th International Workshop on Fuzzy Logic and Applications. 2005. Presented at: WILF '05; September 15-17, 2005:128; Crema, Italy. URL: https://link.springer.com/chapter/10.1007/11676935_15 [CrossRef]
- Harrison D, Sampson M, Reszel J, Abdulla K, Barrowman N, Cumber J, et al. Too many crying babies: a systematic review of pain management practices during immunizations on YouTube. BMC Pediatr. May 29, 2014;14(1):134. [FREE Full text] [CrossRef] [Medline]
- Egede J, Valstar M, Torres MT, Sharkey D. Automatic neonatal pain estimation: an acute pain in Neonates database. In: Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction. 2019. Presented at: ACII '19; September 3-6, 2019:1-7; Cambridge, UK. URL: https://ieeexplore.ieee.org/document/8925480 [CrossRef]
- Zamzmi G, Pai CY, Goldgof D, Kasturi R, Ashmeade T, Sun Y. A comprehensive and context-sensitive neonatal pain assessment using computer vision. IEEE Trans Affective Comput. Jan 1, 2022;13(1):28-45. [CrossRef]
- Brahnam S, Nanni L, McMurtrey S, Lumini A, Brattin R, Slack M, et al. Neonatal pain detection in videos using the iCOPEvid dataset and an ensemble of descriptors extracted from gaussian of local descriptors. Appl Comput Inform. Jul 17, 2020;19(1/2):122-143. [FREE Full text] [CrossRef]
- Salekin MS, Zamzmi G, Hausmann J, Goldgof D, Kasturi R, Kneusel M, et al. Multimodal neonatal procedural and postoperative pain assessment dataset. Data Brief. Apr 2021;35:106796. [FREE Full text] [CrossRef] [Medline]
- Salekin MS, Zamzmi G, Goldgof D, Kasturi R, Hoppe T, Sun Y. First investigation into the use of deep learning for continuous assessment of neonatal postoperative pain. In: Proceedings of the 15th IEEE International Conference on Automatic Face and Gesture Recognition. 2020. Presented at: FG '20; November 16-20, 2020:415-419; Buenos Aires, Argentina. URL: https://ieeexplore.ieee.org/document/9320233 [CrossRef]
- Ekman P, Friesen WV. Facial Action Coding System: Investigator's Guide. Palo Alto, CA. Consulting Psychologists Press; 1978.
- Rao KS, Koolagudi SG, Vempada RR. Emotion recognition from speech using global and local prosodic features. Int J Speech Technol. Aug 4, 2012;16(2):143-160. [CrossRef]
- Zambach SA, Cai C, Helms HC, Hald BO, Dong Y, Fordsmann JC, et al. Precapillary sphincters and pericytes at first-order capillaries as key regulators for brain capillary perfusion. Proc Natl Acad Sci U S A. Jun 29, 2021;118(26):e2023749118. [FREE Full text] [CrossRef] [Medline]
- Höfle M, Kenntner-Mabiala R, Pauli P, Alpers GW. You can see pain in the eye: pupillometry as an index of pain intensity under different luminance conditions. Int J Psychophysiol. Dec 2008;70(3):171-175. [CrossRef] [Medline]
- Connelly MA, Brown JT, Kearns GL, Anderson RA, St Peter SD, Neville KA. Pupillometry: a non-invasive technique for pain assessment in paediatric patients. Arch Dis Child. Dec 03, 2014;99(12):1125-1131. [FREE Full text] [CrossRef] [Medline]
- Li C, Pourtaherian A, van Onzenoort L, Ten WE, de With PH. Infant facial expression analysis: towards a real-time video monitoring system using R-CNN and HMM. IEEE J Biomed Health Inform. May 2021;25(5):1429-1440. [CrossRef] [Medline]
- Hadjileontiadis LJ. EEG-based tonic cold pain characterization using wavelet higher order spectral features. IEEE Trans Biomed Eng. Aug 2015;62(8):1981-1991. [CrossRef] [Medline]
- Rissacher D, Dowman R, Schuckers SA. Identifying frequency-domain features for an EEG-based pain measurement system. In: Proceedings of the 33rd Annual Northeast Bioengineering Conference. 2007. Presented at: NEBC '07; March 10-11, 2007:114-115; Stony Brook, NY. URL: https://ieeexplore.ieee.org/document/4413305 [CrossRef]
- Adjei T, Von Rosenberg W, Goverdovsky V, Powezka K, Jaffer U, Mandic DP. Pain prediction from ECG in vascular surgery. IEEE J Transl Eng Health Med. 2017;5:2800310. [FREE Full text] [CrossRef] [Medline]
- Alghamdi T, Alaghband G. SAFEPA: an expandable multi-pose facial expressions pain assessment method. Appl Sci. Jun 16, 2023;13(12):7206. [CrossRef]
- Robinson ME, O'Shea AM, Craggs JG, Price DD, Letzen JE, Staud R. Comparison of machine classification algorithms for fibromyalgia: neuroimages versus self-report. J Pain. May 2015;16(5):472-477. [FREE Full text] [CrossRef] [Medline]
- Tu Y, Fu Z, Tan A, Huang G, Hu L, Hung Y, et al. A novel and effective fMRI decoding approach based on sliced inverse regression and its application to pain prediction. Neurocomputing. Jan 2018;273:373-384. [CrossRef]
- Shen W, Tu Y, Gollub RL, Ortiz A, Napadow V, Yu S, et al. Visual network alterations in brain functional connectivity in chronic low back pain: a resting state functional connectivity and machine learning study. Neuroimage Clin. 2019;22:101775. [FREE Full text] [CrossRef] [Medline]
- Karunakaran KD, Peng K, Berry D, Green S, Labadie R, Kussman B, et al. NIRS measures in pain and analgesia: fundamentals, features, and function. Neurosci Biobehav Rev. Jan 2021;120:335-353. [CrossRef] [Medline]
- Fernandez Rojas R, Huang X, Ou KL. A machine learning approach for the identification of a biomarker of human pain using fNIRS. Sci Rep. Apr 04, 2019;9(1):5645. [FREE Full text] [CrossRef] [Medline]
- Electroencephalogram (EEG). Johns Hopkins Medicine. URL: https://www.hopkinsmedicine.org/health/treatment-tests-and-therapies/electroencephalogram-eeg#:~:text=An%20EEG%20is%20a%20test,activity%20of%20your%20brain%20cells [accessed 2024-04-29]
- Jiang M, Mieronkoski R, Rahmani AM, Hagelberg N, Salanterä S, Liljeberg P. Ultra-short-term analysis of heart rate variability for real-time acute pain monitoring with wearable electronics. In: Proceedings of the 2017 IEEE International Conference on Bioinformatics and Biomedicine. 2017. Presented at: BIBM '17; November 13-16, 2017:1025-1032; Kansas City, MO. URL: https://ieeexplore.ieee.org/document/8217798 [CrossRef]
- Chu Y, Zhao X, Yao J, Zhao Y, Wu Z. Physiological signals based quantitative evaluation method of the pain. IFAC Proc Vol. 2014;47(3):2981-2986. [CrossRef]
- Werner P, Al-Hamadi A, Niese R, Walter S, Gruss S, Traue HC. Towards pain monitoring: facial expression, head pose, a new database, an automatic system and remaining challenges. In: Proceedings of the 2013 Conference on British Machine Vision. 2013. Presented at: BMVC '13; September 9-13, 2013:1-13; Bristol, UK. URL: https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=03f075e95638bc66e687badd97a58c5de67e58e6 [CrossRef]
- Chu Y, Zhao X, Han J, Su Y. Physiological signal-based method for measurement of pain intensity. Front Neurosci. May 26, 2017;11:279. [FREE Full text] [CrossRef] [Medline]
- Susam BT, Akcakaya M, Nezamfar H, Diaz D, Xu XL, de Sa VR, et al. Automated pain assessment using electrodermal activity data and machine learning. Annu Int Conf IEEE Eng Med Biol Soc. Jul 2018;2018:372-375. [FREE Full text] [CrossRef] [Medline]
- Jiang M, Mieronkoski R, Syrjälä E, Anzanpour A, Terävä V, Rahmani AM, et al. Acute pain intensity monitoring with the classification of multiple physiological parameters. J Clin Monit Comput. Jun 26, 2019;33(3):493-507. [FREE Full text] [CrossRef] [Medline]
- Mak JN, Hu Y, Luk K. ICA-based ECG removal from surface electromyography and its effect on low back pain assessment. In: Proceedings of the 3rd International IEEE/EMBS Conference on Neural Engineering. 2007. Presented at: CNE '07; May 2-5, 2007:646-649; Kohala Coast, HI. URL: https://ieeexplore.ieee.org/document/4227361 [CrossRef]
- Badura A, Masłowska A, Myśliwiec A, Piętka E. Multimodal signal analysis for pain recognition in physiotherapy using wavelet scattering transform. Sensors (Basel). Feb 12, 2021;21(4):1311. [FREE Full text] [CrossRef] [Medline]
- Prkachin KM, Solomon PE. The structure, reliability and validity of pain expression: evidence from patients with shoulder pain. Pain. Oct 15, 2008;139(2):267-274. [CrossRef] [Medline]
- Williams AC. Facial expression of pain: an evolutionary account. Behav Brain Sci. Aug 11, 2002;25(4):439-455. [CrossRef] [Medline]
- Ashraf AB, Lucey S, Cohn JF, Chen T, Ambadar Z, Prkachin KM, et al. The painful face - pain expression recognition using active appearance models. Image Vis Comput. Oct 2009;27(12):1788-1796. [FREE Full text] [CrossRef] [Medline]
- Lucey P, Cohn JF, Matthews I, Lucey S, Sridharan S, Howlett J, et al. Automatically detecting pain in video through facial action units. IEEE Trans Syst Man Cybern B Cybern. Jun 2011;41(3):664-674. [FREE Full text] [CrossRef] [Medline]
- Gholami B, Haddad WM, Tannenbaum AR. Relevance vector machine learning for neonate pain intensity assessment using digital imaging. IEEE Trans Biomed Eng. Jun 2010;57(6):1457-1466. [FREE Full text] [CrossRef] [Medline]
- Hammal Z, Cohn JF. Automatic detection of pain intensity. Proc ACM Int Conf Multimodal Interact. Oct 2012;2012:47-52. [FREE Full text] [CrossRef] [Medline]
- Kaltwang S, Rudovic O, Pantic M. Continuous pain intensity estimation from facial expressions. In: Proceedings of the 8th International Symposium Conference on Advances in Visual Computing. 2012. Presented at: ISVC '12; July 16-18, 2012:368-377; Crete, Greece. URL: https://link.springer.com/chapter/10.1007/978-3-642-33191-6_36 [CrossRef]
- Khan RA, Meyer A, Konik H, Bouakaz S. Pain detection through shape and appearance features. In: Proceedings of the 2013 IEEE International Conference on Multimedia and Expo. 2013. Presented at: ICME '13; July 15-19, 2013:1-6; San Jose, CA. URL: https://ieeexplore.ieee.org/document/6607608 [CrossRef]
- Pedersen H. Learning appearance features for pain detection using the UNBC-McMaster shoulder pain expression archive database. In: Proceedings of the 10th International Conference on Computer Vision Systems. 2015. Presented at: ICVS '15; July 6-9, 2015:10-36; Copenhagen, Denmark. URL: https://dl.acm.org/doi/10.1007/978-3-319-20904-3_12 [CrossRef]
- Egede JO, Song S, Olugbade TA, Wang C, Williams AC, Meng G, et al. EMOPAIN challenge 2020: multimodal pain evaluation from facial and bodily expressions. In: Proceedings of the 15th IEEE International Conference on Automatic Face and Gesture Recognition. 2020. Presented at: FG' 20; November 16-20, 2020:849-856; Buenos Aires, Argentina. URL: https://dl.acm.org/doi/10.1109/FG47880.2020.00078 [CrossRef]
- Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv. Preprint posted online September 4, 2014. [FREE Full text]
- He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. 2016. Presented at: CVPR '16; June 27-30, 2016:770-778; Las Vegas, NV. URL: https://ieeexplore.ieee.org/document/7780459 [CrossRef]
- Rudovic O, Tobis N, Kaltwang S, Schuller B, Rueckert D, Cohn JF, et al. Personalized federated deep learning for pain estimation from face images. arXiv. Preprint posted online January 12, 2021. [FREE Full text]
- Hosseini E, Fang R, Zhang R, Chuah CN, Orooji M, Rafatirad S, et al. Convolution neural network for pain intensity assessment from facial expression. Annu Int Conf IEEE Eng Med Biol Soc. Jul 2022;2022:2697-2702. [CrossRef] [Medline]
- Barsoum E, Zhang C, Ferrer CC, Zhang Z. Training deep networks for facial expression recognition with crowd-sourced label distribution. In: Proceedings of the 18th ACM International Conference on Multimodal Interaction. 2016. Presented at: ICMI '16; November 12-16, 2016:278-283; Tokyo, Japan. URL: https://dl.acm.org/doi/10.1145/2993148.2993165 [CrossRef]
- Huang D, Xia Z, Li L, Wang K, Feng X. Pain-awareness multistream convolutional neural network for pain estimation. J Electron Imag. Jul 1, 2019;28(04):1. [CrossRef]
- Semwal A, Londhe ND. ECCNet: an ensemble of compact convolution neural network for pain severity assessment from face images. In: Proceedings of the 11th International Conference on Cloud Computing, Data Science & Engineering. 2021. Presented at: Confluence '21; January 28-29, 2021:761-766; Noida, India. URL: https://ieeexplore.ieee.org/document/9377197 [CrossRef]
- Kharghanian R, Peiravi A, Moradi F. Pain detection from facial images using unsupervised feature learning approach. In: Proceedings of the 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2016. Presented at: EMBC '16; August 16-20, 2016:419-422; Orlando, FL. URL: https://ieeexplore.ieee.org/document/7590729 [CrossRef]
- Kharghanian R, Peiravi A, Moradi F, Iosifidis A. Pain detection using batch normalized discriminant restricted Boltzmann machine layers. J Vis Commun Image Represen. Apr 2021;76:103062. [CrossRef]
- Semwal A, Londhe ND. MVFNet: a multi-view fusion network for pain intensity assessment in unconstrained environment. Biomed Signal Process Control. May 2021;67:102537. [CrossRef]
- Alghamdi T, Alaghband G. Facial expressions based automatic pain assessment system. Appl Sci. Jun 24, 2022;12(13):6423. [CrossRef]
- Dai L, Broekens J, Truong KP. Real-time pain detection in facial expressions for health robotics. In: Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos. 2019. Presented at: ACIIW '19; September 3-6, 2019:277-283; Cambridge, UK. URL: https://ieeexplore.ieee.org/document/8925192 [CrossRef]
- Karamitsos I, Seladji I, Modak S. A modified CNN network for automatic pain identification using facial expressions. J Softw Eng Appl. 2021;14(08):400-417. [CrossRef]
- Barua PD, Baygin N, Dogan S, Baygin M, Arunkumar N, Fujita H, et al. Automated detection of pain levels using deep feature extraction from shutter blinds-based dynamic-sized horizontal patches with facial images. Sci Rep. Oct 14, 2022;12(1):17297. [FREE Full text] [CrossRef] [Medline]
- Zamzmi G, Paul R, Goldgof D, Kasturi R, Sun Y. Pain assessment from facial expression: neonatal convolutional neural network (N-CNN). In: Proceedings of the 2019 International Joint Conference on Neural Networks. 2019. Presented at: IJCNN '19; July 14-19, 2019:1-7; Budapest, Hungary. URL: https://ieeexplore.ieee.org/document/8851879 [CrossRef]
- Witherow MA, Samad MD, Diawara N, Bar HY, Iftekharuddin KM. Deep adaptation of adult-child facial expressions by fusing landmark features. IEEE Trans Affective Comput. Jul 2024;15(3):847-858. [CrossRef]
- Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Ensemble neural network approach detecting pain intensity from facial expressions. Artif Intell Med. Sep 2020;109:101954. [CrossRef] [Medline]
- Bargshady G, Zhou X, Deo RC, Soar J, Whittaker F, Wang H. Enhanced deep learning algorithm development to detect pain intensity from facial expression images. Expert Syst Appl. Jul 2020;149:113305. [CrossRef]
- Tavakolian M, Hadid A. Deep spatiotemporal representation of the face for automatic pain intensity estimation. In: Proceedings of the 24th International Conference on Pattern Recognition. 2018. Presented at: ICPR '18; August 20-24, 2018:350-354; Beijing, China. URL: https://ieeexplore.ieee.org/document/8545324 [CrossRef]
- Tavakolian M, Hadid A. A spatiotemporal convolutional neural network for automatic pain intensity estimation from facial dynamics. Int J Comput Vis. Jun 25, 2019;127(10):1413-1425. [CrossRef]
- Huang Y, Qing L, Xu S, Wang L, Peng Y. HybNet: a hybrid network structure for pain intensity estimation. Vis Comput. Feb 04, 2021;38(3):871-882. [CrossRef]
- Wang J, Sun H. Pain intensity estimation using deep spatiotemporal and handcrafted features. IEICE Trans Inf Syst. 2018;E101.D(6):1572-1580. [CrossRef]
- de Melo WC, Granger E, Lopez MB. Facial expression analysis using decomposed multiscale spatiotemporal networks. Expert Syst Appl. Feb 2024;236:121276. [CrossRef]
- Granger E, Cardinal P, Praveen RG. Deep domain adaptation for ordinal regression of pain intensity estimation using weakly-labelled videos. arXiv. Preprint posted online August 13, 2020. [FREE Full text]
- Praveen RG, Granger E, Cardinal P. Deep weakly supervised domain adaptation for pain localization in videos. In: Proceedings of the 15th IEEE International Conference on Automatic Face and Gesture Recognition. 2020. Presented at: FG '20; November 16-20, 2020:473-480; Buenos Aires, Argentina. URL: https://ieeexplore.ieee.org/document/9320216 [CrossRef]
- Carreira J, Zisserman A. Quo vadis, action recognition? A new model and the kinetics dataset. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition. 2017. Presented at: CVPR '17; July 21-26, 2017:4724-4733; Honolulu, HI. URL: https://ieeexplore.ieee.org/document/8099985 [CrossRef]
- Shu L, Xie J, Yang M, Li Z, Li Z, Liao D, et al. A review of emotion recognition using physiological signals. Sensors (Basel). Jun 28, 2018;18(7):2074. [FREE Full text] [CrossRef] [Medline]
- Li W, Zhang Z, Song A. Physiological-signal-based emotion recognition: an odyssey from methodology to philosophy. Measurement. Feb 2021;172:108747. [CrossRef]
- Panavaranan P, Wongsawat Y. EEG-based pain estimation via fuzzy logic and polynomial kernel support vector machine. In: Proceedings of the 2013 Biomedical Engineering International Conference. 2013. Presented at: BMEiCon '13; October 23-25, 2013:1-4; Amphur Muang, Thailand. URL: https://ieeexplore.ieee.org/document/6687668 [CrossRef]
- Vijayakumar V, Case M, Shirinpour S, He B. Quantifying and characterizing tonic thermal pain across subjects from EEG data using random forest models. IEEE Trans Biomed Eng. Dec 2017;64(12):2988-2996. [FREE Full text] [CrossRef] [Medline]
- Wager TD, Atlas LY, Lindquist MA, Roy M, Woo C, Kross E. An fMRI-based neurologic signature of physical pain. N Engl J Med. Apr 11, 2013;368(15):1388-1397. [FREE Full text] [CrossRef] [Medline]
- Meeuse JJ, Löwik MS, Löwik SA, Aarden E, van Roon AM, Gans RO, et al. Heart rate variability parameters do not correlate with pain intensity in healthy volunteers. Pain Med. Aug 01, 2013;14(8):1192-1201. [CrossRef] [Medline]
- Hosseini E, Fang R, Zhang R, Rafatirad S, Homayoun H. Emotion and stress recognition utilizing galvanic skin response and wearable technology: a real-time approach for mental health care. In: Proceedings of the 2023 IEEE International Conference on Bioinformatics and Biomedicine. 2023. Presented at: BIBM '23; December 5-8, 2023:1125-1131; Istanbul, Turkey. URL: https://www.computer.org/csdl/proceedings-article/bibm/2023/10386049/1TObUqDKemQ [CrossRef]
- Hosseini E, Fang R, Zhang R, Parenteau A, Hang S, Rafatirad S. A low cost EDA-based stress detection using machine learning. In: Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine. 2022. Presented at: BIBM '22; December 6-8, 2022:2619-2623; Las Vegas, NV. URL: https://ieeexplore.ieee.org/document/9995093 [CrossRef]
- Merletti R, Farina D. Surface Electromyography: Physiology, Engineering, and Applications. Hoboken, NJ. John Wiley & Sons; 2016.
- Srinivasan J, Balasubramanian V. Low back pain and muscle fatigue due to road cycling—an sEMG study. J Bodyw Mov Ther. Jul 2007;11(3):260-266. [CrossRef]
- Jiang M, Rahmani AM, Westerlund T, Liljeberg P, Tenhunen H. Facial expression recognition with sEMG method. In: Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, 2015. Presented at: IUCC '15; October 26-28, 2015:981-988; Liverpool, UK. URL: https://ieeexplore.ieee.org/document/7363189 [CrossRef]
- Zhang Z, Zhang R, Chang CW, Guo Y, Chi YW, Pan T. iWRAP: a theranostic wearable device with real-time vital monitoring and auto-adjustable compression level for venous thromboembolism. IEEE Trans Biomed Eng. Sep 2021;68(9):2776-2786. [CrossRef] [Medline]
- Zhang R, Fang R, Fang C, Homayoun H, Berk GG. Privee: a wearable for real-time bladder monitoring system. In: Proceedings of the Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing. 2023. Presented at: UbiComp/ISWC '23; October 8-12, 2023:291-295; Cancun, Mexico. URL: https://dl.acm.org/doi/10.1145/3594739.3610782 [CrossRef]
- Loggia ML, Juneau M, Bushnell CM. Autonomic responses to heat pain: heart rate, skin conductance, and their relation to verbal ratings and stimulus intensity. Pain. Mar 2011;152(3):592-598. [CrossRef] [Medline]
- Hautala AJ, Karppinen J, Seppanen T. Short-term assessment of autonomic nervous system as a potential tool to quantify pain experience. Annu Int Conf IEEE Eng Med Biol Soc. Aug 2016;2016:2684-2687. [CrossRef] [Medline]
- Ajayi TA, Salongo L, Zang Y, Wineinger N, Steinhubl S. Mobile health-collected biophysical markers in children with serious illness-related pain. J Palliat Med. Apr 01, 2021;24(4):580-588. [FREE Full text] [CrossRef] [Medline]
- Nazari G, MacDermid JC, Sinden KE, Richardson J, Tang A. Reliability of Zephyr bioharness and Fitbit charge measures of heart rate and activity at rest, during the modified Canadian aerobic fitness test, and recovery. J Strength Cond Res. Feb 2019;33(2):559-571. [CrossRef] [Medline]
- Rawstorn JC, Gant N, Warren I, Doughty RN, Lever N, Poppe KK, et al. Measurement and data transmission validity of a multi-biosensor system for real-time remote exercise monitoring among cardiac patients. JMIR Rehabil Assist Technol. Mar 20, 2015;2(1):e2. [FREE Full text] [CrossRef] [Medline]
- Løberg F, Goebel V, Plagemann T. Quantifying the signal quality of low-cost respiratory effort sensors for sleep apnea monitoring. In: Proceedings of the 3rd International Workshop on Multimedia for Personal Health and Health Care. 2018. Presented at: HealthMedia '18; October 22, 2018:3-11; Seoul, Republic of Korea. URL: https://dl.acm.org/doi/10.1145/3264996.3264998 [CrossRef]
- Fang R, Zhang R, Hosseini E, Fang C, Rafatirad S, Homayoun H. Introducing an open-source Python toolkit for machine learning research in physiological signal based affective computing. In: Proceedings of the 2023 IEEE International Conference on Bioinformatics and Biomedicine. 2023. Presented at: BIBM '23; December 5-8, 2023:1890-1894; Istanbul, Turkey. URL: https://ieeexplore.ieee.org/document/10385965 [CrossRef]
- Makowski D, Pham T, Lau ZJ, Brammer JC, Lespinasse F, Pham H, et al. NeuroKit2: a Python toolbox for neurophysiological signal processing. Behav Res Methods. Aug 2021;53(4):1689-1696. [CrossRef] [Medline]
- Cabañero-Gomez L, Hervas R, Gonzalez I, Rodriguez-Benitez L. eeglib: a Python module for EEG feature extraction. SoftwareX. Jul 2021;15:100745. [CrossRef]
- Iashin V, Korbar B, Georgievski B, Hoppe J. v-iashin / video_features. GitHub. URL: https://github.com/v-iashin/video_features [accessed 2024-04-29]
- Lenain R, Weston J, Shivkumar A, Fristed E. Surfboard: audio feature extraction for modern machine learning. arXiv. Preprint posted online May 18, 2020. [FREE Full text] [CrossRef]
- Shaffer F, Ginsberg JP. An overview of heart rate variability metrics and norms. Front Public Health. Sep 28, 2017;5:258. [FREE Full text] [CrossRef] [Medline]
- Walter S, Gruss S, Limbrecht-Ecklundt K, Traue HC, Werner P, Al-Hamadi A, et al. Automatic pain quantification using autonomic parameters. Psychol Neurosci. 2014;7(3):363-380. [CrossRef]
- Phinyomark A, Phukpattaranont P, Limsakul C. Feature reduction and selection for EMG signal classification. Expert Syst Appl. Jun 2012;39(8):7420-7431. [CrossRef]
- Phinyomark A, Scheme E. An investigation of temporally inspired time domain features for electromyographic pattern recognition. In: Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2018. Presented at: EMBC '18; July 18-21, 2018:5236-5240; Honolulu, HI. URL: https://ieeexplore.ieee.org/document/8513427 [CrossRef]
- Cao C, Slobounov S. Application of a novel measure of EEG non-stationarity as 'Shannon-entropy of the peak frequency shifting' for detecting residual abnormalities in concussed individuals. Clin Neurophysiol. Jul 2011;122(7):1314-1321. [FREE Full text] [CrossRef] [Medline]
- Pincus SM. Approximate entropy as a measure of system complexity. Proc Natl Acad Sci U S A. Mar 15, 1991;88(6):2297-2301. [FREE Full text] [CrossRef] [Medline]
- Kosko B. Fuzzy entropy and conditioning. Inf Sci. Dec 1986;40(2):165-174. [CrossRef]
- Richman JS, Moorman JR. Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol Heart Circ Physiol. Jun 2000;278(6):H2039-H2049. [FREE Full text] [CrossRef] [Medline]
- Lin J. Divergence measures based on the Shannon entropy. IEEE Trans Inform Theory. 1991;37(1):145-151. [CrossRef]
- Zhang A, Yang B, Huang L. Feature extraction of EEG signals using power spectral entropy. In: Proceedings of the 2008 International Conference on BioMedical Engineering and Informatics. 2008. Presented at: BMEI '08; May 27-30, 2008:435-439; Sanya, China. URL: https://ieeexplore.ieee.org/document/4549210 [CrossRef]
- Kennedy HL. A new statistical measure of signal similarity. In: Proceedings of the 2007 Conference on Information, Decision and Control. 2007. Presented at: IDC '07; February 12-14, 2007:112-117; Adelaide, Australia. URL: https://ieeexplore.ieee.org/document/4252487 [CrossRef]
- Dukic S, Iyer PM, Mohr K, Hardiman O, Lalor EC, Nasseroleslami B. Estimation of coherence using the median is robust against EEG artefacts. Annu Int Conf IEEE Eng Med Biol Soc. Jul 2017;2017:3949-3952. [CrossRef] [Medline]
- Chen HM, Varshney PK, Arora MK. Performance of mutual information similarity measure for registration of multitemporal remote sensing images. IEEE Trans Geosci Remote Sensing. Nov 2003;41(11):2445-2454. [CrossRef]
- Behzadfar N. A brief overview on analysis and feature extraction of electroencephalogram signals. Signal Process Renew Energy. 2022;6(1):39-64. [FREE Full text]
- van der Miesen MM, Lindquist MA, Wager TD. Neuroimaging-based biomarkers for pain: state of the field and current directions. Pain Rep. 2019;4(4):e751. [FREE Full text] [CrossRef] [Medline]
- Werner P, Al-Hamadi A, Niese R, Gruss S, Traue HC. Automatic pain recognition from video and biomedical signals. In: Proceedings of the 22nd International Conference on Pattern Recognition. 2014. Presented at: ICPR '14; August 24-28, 2014:4582-4587; Stockholm, Sweden. URL: https://ieeexplore.ieee.org/document/6977497 [CrossRef]
- Gruss S, Treister R, Werner P, Traue HC, Crawcour S, Andrade A, et al. Pain intensity recognition rates via biopotential feature patterns with support vector machines. PLoS One. Oct 16, 2015;10(10):e0140330. [FREE Full text] [CrossRef] [Medline]
- Campbell E, Phinyomark A, Scheme E. Feature extraction and selection for pain recognition using peripheral physiological signals. Front Neurosci. May 7, 2019;13:437. [FREE Full text] [CrossRef] [Medline]
- Kachele M, Thiam P, Amirian M, Schwenker F, Palm G. Methods for person-centered continuous pain intensity assessment from bio-physiological channels. IEEE J Sel Top Signal Process. Aug 2016;10(5):854-864. [CrossRef]
- Fang R, Zhang R, Hosseini SM, Faghih M, Rafatirad S, et al. Pain level modeling of intensive care unit patients with machine learning methods: an effective congeneric clustering-based approach. In: Proceedings of the 4th International Conference on Intelligent Medicine and Image Processing. 2022. Presented at: IMIP '22; March 18-21, 2022:89-95; Tianjin, China. URL: https://dl.acm.org/doi/pdf/10.1145/3524086.3524100 [CrossRef]
- Nakano K, Ota Y, Ukai H, Nakamura K, Fujita H. Frequency detection method based on recursive DFT algorithm. In: Proceedings of the 14th International Conference on Power Systems Computation. 2002. Presented at: PSCC '02; June 24-28, 2002:1-7; Seville, Spain. URL: https://www.researchgate.net/publication/255601650_Frequency_detection_method_based_on_recursive_DFT_algorithm
- Chen W, Zhuang J, Yu W, Wang Z. Measuring complexity using FuzzyEn, ApEn, and SampEn. Med Eng Phys. Jan 2009;31(1):61-68. [CrossRef] [Medline]
- Wolpert DH, Macready WG. No free lunch theorems for optimization. IEEE Trans Evol Computat. 1997;1(1):67-82. [CrossRef]
- Bellmann P, Thiam P, Kestler HA, Schwenker F. Machine learning-based pain intensity estimation: where pattern recognition meets chaos theory—an example based on the BioVid heat pain database. IEEE Access. 2022;10:102770-102777. [CrossRef]
- Gouverneur P, Li F, Adamczyk WM, Szikszay TM, Luedtke K, Grzegorzek M. Comparison of feature extraction methods for physiological signals for heat-based pain recognition. Sensors (Basel). Jul 15, 2021;21(14):4838. [FREE Full text] [CrossRef] [Medline]
- Othman E, Werner P, Saxen F, Fiedler MA, Al-Hamadi A. An automatic system for continuous pain intensity monitoring based on analyzing data from Uni-, Bi-, and multi-modality. Sensors (Basel). Jul 01, 2022;22(13):4992. [FREE Full text] [CrossRef] [Medline]
- Pouromran F, Lin Y, Kamarthi S. Personalized deep Bi-LSTM RNN based model for pain intensity classification using EDA signal. Sensors (Basel). Oct 22, 2022;22(21):8087. [CrossRef]
- Thiam P, Hihn H, Braun DA, Kestler HA, Schwenker F. Multi-modal pain intensity assessment based on physiological signals: a deep learning perspective. Front Physiol. Sep 1, 2021;12:720464. [FREE Full text] [CrossRef] [Medline]
- Cortes C, Vapnik V. Support-vector networks. Mach Learn. 1995;20(3):273-297. [FREE Full text]
- Verikas A, Gelzinis A, Bacauskiene M. Mining data with random forests: a survey and results of new tests. Pattern Recognit. Feb 2011;44(2):330-349. [CrossRef]
- Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and Regression Trees. New York, NY. Routledge; 2017.
- Breiman L. Random forests. Mach Learn. 2001;45(1):5-32. [FREE Full text]
- Gilpin LH, Bau D, Yuan BZ, Bajwa A, Specter M, Kagal L. Explaining explanations: an overview of interpretability of machine learning. In: Proceedings of the 5th International Conference on Data Science and Advanced Analytics. 2018. Presented at: DSAA '18; October 1-3, 2018:80-89; Turin, Italy. URL: https://ieeexplore.ieee.org/document/8631448 [CrossRef]
- Pal M, Mather PM. An assessment of the effectiveness of decision tree methods for land cover classification. Remote Sens Environ. Aug 2003;86(4):554-565. [CrossRef]
- Fang R, Zhang R, Hosseini E, Parenteau AM, Hang S, Rafatirad S. Prevent over-fitting and redundancy in physiological signal analyses for stress detection. In: Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine. 2022. Presented at: BIBM '22; December 6-8, 2022:2585-2588; Las Vegas, NV. URL: https://ieeexplore.ieee.org/document/9995121 [CrossRef]
- Naeini EK, Shahhosseini S, Subramanian A, Yin T, Rahmani AM, Dutt N. An edge-assisted and smart system for real-time pain monitoring. In: Proceedings of the 2019 IEEE/ACM International Conference on Connected Health: Applications, Systems and Engineering Technologies. 2019. Presented at: CHASE '19; September 25-27, 2019:47-52; Arlington, VA. URL: https://ieeexplore.ieee.org/document/8908653 [CrossRef]
- Werner P, Al-Hamadi A, Gruss S, Walter S. Twofold-multimodal pain recognition with the X-ITE pain database. In: Proceedings of the 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos. 2019. Presented at: ACIIW '19; September 3-6, 2019:290-296; Cambridge, UK. URL: https://ieeexplore.ieee.org/document/8925061 [CrossRef]
- Fang C, Miao N, Srivastav S, Liu J, Zhang R, Fang R, Asmita, et al. Large language models for code analysis: do LLMs really do their job? arXiv. Preprint posted online October 18, 2023. [FREE Full text]
- Lopez-Martinez D, Picard R. Multi-task neural networks for personalized pain recognition from physiological signals. In: Proceedings of the 7th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos. 2017. Presented at: ACIIW '17; October 23-26, 2017:181-184; San Antonio, TX. URL: https://www.computer.org/csdl/proceedings-article/aciiw/2017/08272611/12OmNAZfxKZ [CrossRef]
- Salekin MS, Zamzmi G, Goldgof D, Kasturi R, Ho T, Sun Y. Multimodal spatio-temporal deep learning approach for neonatal postoperative pain assessment. Comput Biol Med. Feb 2021;129:104150. [FREE Full text] [CrossRef] [Medline]
- Pinzon-Arenas JO, Kong Y, Chon KH, Posada-Quintero HF. Design and evaluation of deep learning models for continuous acute pain detection based on phasic electrodermal activity. IEEE J Biomed Health Inform. Sep 2023;27(9):4250-4260. [CrossRef] [Medline]
- Bertrand AL, Garcia JB, Viera EB, Santos AM, Bertrand RH. Pupillometry: the influence of gender and anxiety on the pain response. Pain Physician. 2013;16(3):E257-E266. [FREE Full text] [CrossRef] [Medline]
- Chapman CR, Oka S, Bradshaw DH, Jacobson RC, Donaldson GW. Phasic pupil dilation response to noxious stimulation in normal volunteers: relationship to brain evoked potentials and pain report. Psychophysiology. Jan 20, 1999;36(1):44-52. [CrossRef] [Medline]
- Eisenach JC, Curry R, Aschenbrenner CA, Coghill RC, Houle TT. Pupil responses and pain ratings to heat stimuli: reliability and effects of expectations and a conditioning pain stimulus. J Neurosci Methods. Mar 01, 2017;279:52-59. [FREE Full text] [CrossRef] [Medline]
- Wang L, Guo Y, Dalip B, Xiao Y, Urman RD, Lin Y. An experimental study of objective pain measurement using pupillary response based on genetic algorithm and artificial neural network. Appl Intell. May 17, 2021;52(2):1145-1156. [CrossRef]
- Kächele M, Werner P, Al-Hamadi A, Palm G, Walter S, Schwenker F. Bio-visual fusion for person-independent recognition of pain intensity. In: Proceedings of the 12th International Workshop on Multiple Classifier Systems. 2015. Presented at: MCS '15; June 29-July 1, 2015:220-230; Günzburg, Germany. URL: https://link.springer.com/chapter/10.1007/978-3-319-20248-8_19 [CrossRef]
- Kächele M, Thiam P, Amirian M, Werner P, Walter S, Schwenker F, et al. Multimodal data fusion for person-independent, continuous estimation of pain intensity. In: Proceedings of the 16th International Conference on Engineering Applications of Neural Networks. 2015. Presented at: EANN '15; September 25-28, 2015:275-285; Rhodes, Greece. URL: https://link.springer.com/chapter/10.1007/978-3-319-23983-5_26 [CrossRef]
- Thiam P, Kessler V, Schwenker F. Hierarchical combination of video features for personalised pain level recognition. In: Proceedings of the 2017 Conference on European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. 2017. Presented at: ESANN '17; April 26-28, 2017:465-470; Bruges, Belgium. URL: https://www.esann.org/sites/default/files/proceedings/legacy/es2017-104.pdf
- Kessler V, Thiam P, Amirian M, Schwenker F. Multimodal fusion including camera photoplethysmography for pain recognition. In: Proceedings of the 2017 International Conference on Companion Technology. 2017. Presented at: ICCT '17; September 11-13, 2017:1-4; Ulm, Germany. URL: https://ieeexplore.ieee.org/document/8287083 [CrossRef]
- Thiam P, Schwenker F. Multi-modal data fusion for pain intensity assessment and classification. In: Proceedings of the 7th International Conference on Image Processing Theory, Tools and Applications. 2017. Presented at: IPTA '17; November 28-December 1, 2017:1-6; Montreal, QC. URL: https://ieeexplore.ieee.org/document/8310115 [CrossRef]
- Bellmann P, Thiam P, Schwenker F. Dominant channel fusion architectures-an intelligent late fusion approach. In: Proceedings of the 2020 International Joint Conference on Neural Networks. 2020. Presented at: IJCNN '20; July 19-24, 2020:1-8; Glasgow, Scotland. URL: https://ieeexplore.ieee.org/document/9206814 [CrossRef]
- Bellmann P, Thiam P, Schwenker F. Using meta labels for the training of weighting models in a sample-specific late fusion classification architecture. In: Proceedings of the 25th International Conference on Pattern Recognition. 2021. Presented at: ICPR '21; January 10-15, 2021:2604-2611; Milan, Italy. URL: https://ieeexplore.ieee.org/document/9412509 [CrossRef]
- Oliveira F, Costa DG, Assis F, Silva I. Internet of intelligent things: a convergence of embedded systems, edge computing and machine learning. Internet Things. Jul 2024;26:101153. [CrossRef]
- Xiong Y, Zeng Z, Chakraborty R, Tan M, Fung G, Li Y, et al. Nyströmformer: a Nyström-based algorithm for approximating self-attention. In: Proceedings of the 35th AAAI Conference on Artificial Intelligence. 2021. Presented at: AAAI '21; February 2-9, 2021:14138-14148; Vancouver, BC. URL: https://tinyurl.com/yc3epb39 [CrossRef]
- Nielsen CS, Staud R, Price DD. Individual differences in pain sensitivity: measurement, causation, and consequences. J Pain. Mar 2009;10(3):231-237. [FREE Full text] [CrossRef] [Medline]
- Jiang M, Rosio R, Salanterä S, Rahmani AM, Liljeberg P, da Silva DS, et al. Personalized and adaptive neural networks for pain detection from multi-modal physiological features. Expert Syst Appl. Jan 2024;235:121082. [CrossRef]
- Casti P, Mencattini A, Filippi J, D'Orazio M, Comes MC, Giuseppe DD. A personalized assessment platform for non-invasive monitoring of pain. In: Proceedings of the 2020 IEEE International Symposium on Medical Measurements and Applications. 2020. Presented at: MeMeA '20; June 1-4, 2020:1-5; Bari, Italy. URL: https://ieeexplore.ieee.org/document/9137138 [CrossRef]
- Lopez Martinez D, Rudovic O, Picard R. Personalized automatic estimation of self-reported pain intensity from facial expressions. In: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2017. Presented at: CVPRW '17; July 21-26, 2017:2318-2327; Honolulu, HI. URL: https://ieeexplore.ieee.org/document/8015020 [CrossRef]
- Zhang R, Fang R, Zhang Z, Hosseini E, Orooji M, Homayoun H. Short: real-time bladder monitoring by bio-impedance analysis to aid urinary incontinence. In: Proceedings of the 2023 IEEE/ACM Conference on Connected Health: Applications, Systems and Engineering Technologies. 2023. Presented at: CHASE '23; June 21-23, 2023:138-142; Orlando, FL. URL: https://ieeexplore.ieee.org/document/10183756 [CrossRef]
- Kong Y, Posada-Quintero HF, Chon KH. Real-time high-level acute pain detection using a smartphone and a wrist-worn electrodermal activity sensor. Sensors (Basel). Jun 08, 2021;21(12):3956. [FREE Full text] [CrossRef] [Medline]
- Fang R, Zhang R, Hosseini E, Parenteau AM, Hang S, Rafatirad S. Towards generalized ML model in automated physiological arousal computing: a transfer learning-based domain generalization approach. In: Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine. 2022. Presented at: BIBM '22; December 6-8, 2022:2577-2584; Las Vegas, NV. URL: https://ieeexplore.ieee.org/document/9995340 [CrossRef]
- Kächele M, Amirian M, Thiam P, Werner P, Walter S, Palm G, et al. Adaptive confidence learning for the personalization of pain intensity estimation systems. Evol Syst. Jul 16, 2016;8(1):71-83. [CrossRef]
- Chen J, Liu X, Tu P, Aragones A. Learning person-specific models for facial expression and action unit recognition. Pattern Recognit Lett. Nov 2013;34(15):1964-1970. [CrossRef]
Abbreviations
AU: action unit
AUC: area under the curve
B-CNN: bilinear convolutional neural network
CNN: convolutional neural network
DMSN: Decomposed Multiscale Spatiotemporal Network
EDA: electrodermal activity
FACE-BE-SELF: Facial Expressions Fusing Betamix Selected Landmark Features
FACS: Facial Action Coding System
fMRI: functional magnetic resonance imaging
fNIRS: functional near-infrared spectroscopy
HF: high-frequency
HOG: histogram of oriented gradients
HRV: heart rate variability
ICU: intensive care unit
LBP: local binary pattern
LF: low-frequency
LSTM: long short-term memory
ML: machine learning
PCA: principal component analysis
RF: random forest
RGB: red, green, blue color model
RNN: recurrent neural network
RVR: relevance vector regression
sEMG: surface electromyogram
SNS: sympathetic nervous system
SVM: support vector machine
Edited by J-L Raisaro; submitted 22.09.23; peer-reviewed by A Naser, S Kisvarday, A Subramanian, P Lakshman, A Mazumder; comments to author 11.04.24; revised version received 06.06.24; accepted 23.07.24; published 24.02.25.
Copyright©Ruijie Fang, Elahe Hosseini, Ruoyu Zhang, Chongzhou Fang, Setareh Rafatirad, Houman Homayoun. Originally published in JMIR AI (https://ai.jmir.org), 24.02.2025.
This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR AI, is properly cited. The complete bibliographic information, a link to the original publication on https://www.ai.jmir.org/, as well as this copyright and license information must be included.